
Azure, Flutter, GraphQL, Vue, NuGet

SEP/OCT 2019
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

Design Patterns
for Distributed
Systems

Implementing GraphQL APIs


VUE.js for jQuery Developers
Azure Machine Learning
MGM GRAND LAS VEGAS, NV
NOVEMBER 18 – 21, 2019

GET THE INSIDER VIEW

SCOTT GUTHRIE: Executive Vice President, Cloud + AI Platform, Microsoft
ERIC BOYD: Corporate Vice President, AI Platform, Microsoft
SCOTT HANSELMAN: Principal Program Manager, Web Platform Team, Microsoft
SCOTT HUNTER: Director of Program Management .NET, Microsoft
JEFF FRITZ: Senior Program Manager, Microsoft
JOHN PAPA: Principal Developer Advocate, Microsoft
BOB WARD: Principal Architect, Azure Data/SQL Server Team, Microsoft
KATHLEEN DOLLARD: Principal Program Manager, Microsoft
ANNA THOMAS: Data & Applied Scientist, Microsoft
ROBERT GREEN: Technical Evangelist, DPE, Microsoft

DEVintersection.com 203-264-8220 M-F, 9-4 EDT AzureAIConf.com


ASP.NET * Visual Studio * Azure * Artificial Intelligence * .NET Core * Angular
Architecture * Azure Databricks * Azure IoT * Azure Sphere * Big Data * Blazor * C# 8 * Cloud Security
Cognitive Services * CosmosDB * Data Science VMs * Deep Learning * DevOps * Docker * IoT * Kubernetes
Machine Learning * Microservices * Node.js * Python * React * Security & Compliance
Scalable Architectures * SignalR Core * SQL Server * Visual Studio * Xamarin * and so much more

200+ Sessions * 100+ Microsoft and industry experts * Full-day workshops * Evening events

RICHARD CAMPBELL: Host, .NET Rocks! Entrepreneur, Rabid Podcaster
DAN WAHLIN: Google GDE, Advisor, Developer, Wahlin Consulting
MARKUS EGGER: President and Chief Software Architect, EPS Software Corp.
ZOINER TEJADA: CEO & Architect, Solliance
MICHELE L. BUSTAMANTE: CIO & Architect, Solliance
KIMBERLY L. TRIPP: President / Founder, SQLskills

REGISTER EARLY for a WORKSHOP PACKAGE and receive a choice of Surface Go, Xbox One X, Xbox One S, Surface Headphones, Cortana-enabled Amazon Echo, or hotel gift card! See website for details.

Follow us on: twitch.tv/devintersection
Twitter: @DEVintersection Facebook.com/DEVintersection LinkedIn.com/company/devintersectionconference/
Twitter: @AzureAIConf Facebook.com/MicrosoftAzureAIConference LinkedIn.com/company/microsoftazureaiconf/

Powered by DEVintersection.com
203-264-8220 M–F, 9-4 EDT
TABLE OF CONTENTS

Features

8 Azure Machine Learning Workspace and MLOps
It's when you're working with lots of data that you start looking around for an easier way to keep track of it all. Machine learning and artificial intelligence are the obvious answers, and Sahil shows you why.
Sahil Malik

16 A Design Pattern for Building WPF Business Apps: Part 3
In the third installment of his WPF series, Paul shows you how to get feedback using an Entity Framework entity class. He also shows you how to start expanding user activities, like adding, editing, or deleting screens.
Paul D. Sheriff

24 Responsible Package Management in Visual Studio
If you use a package management tool, like NuGet, Node Package Manager (NPM) for JavaScript, or Maven for Java, you already know how they simplify and automate library consumption. John shows you how to make sure that the packages you download don't cause more trouble than they solve.
John V. Petersen

30 Moving from jQuery to Vue
Even if you don't need the enormity of a SPA, you don't have to lose the benefits of a framework. Shawn recommends using Vue to simplify the code and make it both more reliable and more testable.
Shawn Wildermuth

36 Intro to GraphQL for .NET Developers: Schema, Resolver, and Query Language
Peter introduces you to GraphQL so your REST API client list can grow and change without a lot of pain. You can use a strongly typed schema, eliminate over- and under-fetching, and you can get analytics about how clients are really using your API.
Peter Mbanugo

42 Design Patterns for Distributed Systems
Stefano explores using containers for reusable components and patterns to simplify making reliable distributed systems. He leans on microservices to place all functionality within a single application.
Stefano Tempesta

46 Nest.js Step-by-Step: Part 2
Bilal continues showing us just how interesting, useful, and easy it is to integrate Nest.js with TypeORM. You'll get to replace mock data from the first article with real data this time, too.
Bilal Haidar

54 Cross-Platform Mobile Development Using Flutter
Using Flutter, Google's latest cross-platform framework for developing iOS and Android apps, Wei-Meng shows you how easy developing mobile apps can be.
Wei-Meng Lee

70 Add File Storage to Azure App Services: The Work-Around
When maintaining the hierarchy of a file system and integrating security limits you to a single point of access, you might have some heavy lifting to do while you wait for Microsoft to supply a tool to automate this task. Mike and his team found a great work-around that will keep you happy until the tool is available.
Mike Yeager

Columns

74 Managed Coder: On Time
Ted Neward

Departments

6 Editorial

38 Advertisers Index

73 Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay US $49.99. Payments should be made in US dollars drawn on a US bank. American Express,
MasterCard, Visa, and Discover credit cards are accepted. Bill Me option is available only for US subscriptions. Back issues are available. For subscription information,
send e-mail to subscriptions@code-magazine.com or contact Customer Service at 832-717-4445 ext. 10.
Subscribe online at www.code-magazine.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A.
POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A.

4 Table of Contents codemag.com


EDITORIAL

Code Smells Are Universal


Over the years, I’ve become fluent in several programming languages: C#, JavaScript, Visual Basic .NET,
Ruby, FoxPro, and a few others. Last month, I started the process of adding Python to my repertoire
because my development team is currently in the process of building a data processing platform.

This platform pulls data from multiple sources of data and uses Python (with its rich ecosystem of statistical libraries) to run various models over the data. I was tasked with integrating these Python modules into our ETL pipeline, so I asked the data analyst for a copy of the code to determine first, how it works and second, how I was going to integrate this code into our pipeline.

I spent some time with the developer. The smell of the code became apparent rather quickly. When developing the code, the analyst implemented a metadata-driven approach to loading and running modules for each client. The application looked up the client code and used the parameters attached to that client to make it simple to maintain.

At first blush, this was a good sign. This code "smells" rather nice. Upon further digging, I found some code that has a distinctly unpleasant odor. The main program accepted a number of dynamic command arguments. These parameters were read and assigned to different memory variables. Okay, so far so good. Where was the smell? The smell came from a called module that reread the command line arguments:

start_date = "'%s'" % sys.argv[5]
end_date = "'%s'" % sys.argv[6]

It didn't look correct to me. It shouldn't be the job of the called program to reread the command-line parameters from the calling module's argument list. This was a definite smell to me.

I know that THIS is not an interesting story. The interesting part is that I was able to identify a code smell in an unfamiliar programming language. You see: Code Smells are Universal. Let's take a look at some JavaScript code used to validate the format of a date string in Figure 1. For reference, the correct format of the string is as follows: 1977-05-25 01:30 pm

This code has several different smells. First, it has a bit of stinker code in that it uses brute force to validate a date time string. Can you think of better ways to write this validation? The first idea that comes to mind is that this code could probably be handled by a regular expression. So, does this code have a bad or a good smell?

When it comes to code, whether it has a good or bad smell is a subjective thing. This code is probably a mix of both. The bad smell comes from its brutish nature. It basically validates each character one at a time. The good part is the intention of the code; when an error does occur, the code tells the user EXACTLY what's wrong with the time string.
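As a sketch of the regular-expression idea (in Python rather than the JavaScript of Figure 1; the pattern and function name are mine, not the column's), the whole format check collapses into one pattern:

```python
import re

# Hypothetical regex alternative to the brute-force check in Figure 1.
# Accepts strings shaped like "1977-05-25 01:30 pm".
DATE_FORMAT = re.compile(
    r"^\d{4}-"                  # four-digit year
    r"(0[1-9]|1[0-2])-"         # month 01-12
    r"(0[1-9]|[12]\d|3[01]) "   # day 01-31
    r"(0[1-9]|1[0-2]):"         # hour 01-12
    r"[0-5]\d "                 # minutes 00-59
    r"(am|pm)$"                 # meridiem
)

def is_valid_date_string(value):
    return DATE_FORMAT.match(value) is not None

print(is_valid_date_string("1977-05-25 01:30 pm"))   # True
print(is_valid_date_string("1977-13-25 01:30 pm"))   # False (month 13)
```

Note the trade-off: the regex is short, but unlike the brute-force version it can't tell the user EXACTLY which character is wrong, which was the good smell of the original.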

Finally, other smells can be determined by answering the following questions:

• Does the code work as designed?
• Is the code maintainable?
• Is the code understandable?

In my judgment, the answers to these questions for this bit of code are yes. Even if you don't write a lot of JavaScript code, can you decide for yourself whether the code is any good or not? What comments would you make about this code? Tell you what: Ping me at @rodpaddock on Twitter. I'd love to hear your comments about this code, good or bad. Please be kind though.

After spending some time thinking about the Python code, I came to the realization that most programming falls back on the old premise: It's the concept that matters. By spending time mastering concepts, I've been able to master multiple languages. And now I've also found a new superpower: the ability to look at code in unfamiliar languages and determine whether or not it has code smell, both good and bad.

 Rod Paddock

Figure 1: Validating the format of a date string.

ADVERTORIAL

Screen Grabber Pro: The Best Screen Recorder


Record screen activities easily with an all-purpose desktop recorder.

Looking for a simple yet innovative way to capture video demos, gaming activities, and video tutorials from your PC? All you need is AceThinker Screen Grabber Pro. AceThinker Screen Grabber Pro is a premiere screen and audio recording software that's supported by both Windows and macOS. It's designed to provide optimum performance in recording high-quality video and audio, regardless of the type of recording situation. The tool is especially useful for long gaming videos and comprehensive video demonstrations. All of these features are included within a single payment option, which varies depending on the plan that suits the needs of the user. To learn more about digital solutions from AceThinker, please visit https://acethinker.com/.

Why AceThinker Screen Grabber Pro?

• Record all desktop activities: Equipped with different recording modes, AceThinker Screen Grabber Pro can record the entire screen area, a specific area, an application window, and more. Aside from the desktop screen, the tool can also capture audio from the system and microphones simultaneously. This is essential for people who make instructional videos, as they can incorporate audio directly onto the video.

• Create scheduled tasks: The tool has a task scheduler option that enables users to set a specific time to record automatically. This is an efficient way to record live-streams, webinars, or the Internet activity of your kids, and to schedule regular recordings even if you're not around.

• Edit video during and after recording: Annotate while recording with the built-in editing panel of the tool. There are various video enhancement options available that can be added as the recording progresses. This enables you to process the video easily and saves a lot of time and effort in post-editing.

• Save and share screencasts: After recording the video, you can convert the recorded videos into desired formats for watching on various devices. You can also upload them to a cloud server or share your videos on websites like YouTube and more.

About AceThinker Software

AceThinker Limited was established in 2015 and continues to provide digital multimedia solutions to many households and businesses. Over the years, AceThinker Limited steadily gained popularity by releasing essential multimedia tools that provide different solutions to various situations. AceThinker Screen Grabber Pro has been the premiere offering of AceThinker Limited since its launch. To learn more about the software, please visit https://acethinker.com/desktop-recorder or scan the QR code with your smart phone.

FIND OUT MORE AT ACETHINKER.COM/DESKTOP-RECORDER
ONLINE QUICK ID 1909021

Azure Machine Learning Workspace and MLOps
In my previous article (https://www.codemag.com/Article/1907021/Azure-Machine-Learning-Service), I discussed the Azure Machine Learning Service. The Azure Machine Learning Service is at the core of custom AI. But what really ties it together is the Azure Machine Learning workspace. The process of AI involves working with lots of data, cleaning the data, writing and running experiments, publishing models, and finally collecting real-world data and improving your models. The machine learning workspace provides you and your co-workers with a collaborative environment where you can manage every aspect of your AI projects. You can also use role-based security to define roles within your teams, you can check historical runs, versions, logs, etc., and you can even tie it to your Azure DevOps repos and fully automate this process via MLOps.

In this article, I'll introduce you to all of these and more.

Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik has been a Microsoft MVP for 15 years, an INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. You can find him at @sahilmalik or on his website at https://www.winsmarts.com.

Provision an ML Workspace
Creating an ML workspace is extremely easy. Log into portal.azure.com using an account with a valid Azure subscription, search for Machine Learning Service Workspace, and click on the Create button in the provided blade. You'll be asked to provide a name; for the purposes of this article, choose to create it in a new resource group. The names I picked were sahilWorkspace for the name of the workspace and ML for the name of the resource group. And in just about a minute or so, your Azure Machine Learning service is created.

You may also create an Azure Machine Learning service workspace using the Azure CLI. In order to do so, you first must install the Azure CLI machine learning extension using the command:

az extension add -n azure-cli-ml

You can then create an Azure Machine Learning workspace like this:

az group create -n ML -l eastUS
az ml workspace create -w sahilWorkspace -g ML

Once the workspace is created, you'll notice a number of newly created resources in your subscription, as can be seen in Figure 1.

Figure 1: Newly created resources after you provision an ML workspace

As you can see from Figure 1, the Azure Machine Learning workspace depends on a number of other services in Azure. It needs a storage account where it stores details of runs, experiments, logs, etc. It needs Application Insights to provide you with an inflight recorder. It uses a key vault and managed identities to securely talk to all the resources it needs. Behind the scenes, you'll also see service principals backing the managed identities. You shouldn't be changing the permissions of those service principals manually or you'll ruin it all.

As you continue to use your machine learning workspace, you'll notice that new resources get created or removed. You'll especially see loads of resources appear when you provision an AKS cluster to serve your models.

Walkthrough of the ML Workspace
At this time, you've only created a workspace; you haven't yet put anything in it. So before you go much further, let's examine the major components of the ML workspace. I won't dive into every single aspect here, but just focus on the interesting major players. Go ahead and visit the workspace. Within the workspace you should see a section like that shown in Figure 2.

As can be seen in Figure 2, the Activity Log is a great place to learn what activities have been performed in the workspace. Remember, you're not the only one using this workspace; it's a collaborative area that you share with your co-workers. When an experiment goes awry and starts giving out awful results, this is where you can go and find out exactly what happened recently.

Remember, AI projects need to be secured just like any other project. Perhaps even more so, because as we move forward in time, we will rely more, not less, on AI. In fact, AI systems will be used to hack non-AI systems, such as your friendly local power plant. It's crucial that you know and preserve a history of activities going on in your environment.

The second interesting thing you see here is the Access Control (IAM) section. The Azure Machine Learning workspace relies on the usual Azure Identity and Access Management (IAM) to secure resources and provide access. You can define your own roles as well, but the Azure Machine Learning workspace comes with numerous useful prebuilt roles. For instance, you don't want just anyone to deploy a model, right? Additionally, perhaps you want the log readers, well, to just read the experiment: not edit it, not even accidentally. All of this can be neatly tied down using regular Azure IAM.

Perhaps a superfluous point here is that the Azure Machine Learning workspace is part of the Azure portal. It's there-
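The CLI workflow in this article assumes a .azureml/config.json pointing at your subscription and workspace. As a minimal sketch (the helper function is hypothetical; the field names follow the workspace config.json format), you could generate that file like this:

```python
import json
import os
import tempfile

# Hypothetical helper that writes the .azureml/config.json the later
# CLI commands assume. Field names follow the workspace config format.
def write_workspace_config(folder, subscription_id,
                           resource_group, workspace_name):
    config_dir = os.path.join(folder, ".azureml")
    os.makedirs(config_dir, exist_ok=True)
    path = os.path.join(config_dir, "config.json")
    with open(path, "w") as f:
        json.dump({
            "subscription_id": subscription_id,
            "resource_group": resource_group,
            "workspace_name": workspace_name,
        }, f, indent=4)
    return path

# "<subscription-id>" is a placeholder; the other names match this article.
path = write_workspace_config(
    tempfile.mkdtemp(), "<subscription-id>", "ML", "sahilWorkspace")
with open(path) as f:
    print(json.load(f)["workspace_name"])   # sahilWorkspace
```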


Listing 1: The regression experiment

from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
from sklearn.externals import joblib
import os
import numpy as np
import mylib

os.makedirs('./outputs', exist_ok=True)

X, y = load_diabetes(return_X_y=True)

run = Run.get_context()

X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.2, random_state=0)
data = {"train": {"X": X_train, "y": y_train},
        "test": {"X": X_test, "y": y_test}}

alphas = mylib.get_alphas()

for alpha in alphas:
    # Use the Ridge algorithm to create a regression model
    reg = Ridge(alpha=alpha)
    reg.fit(data["train"]["X"], data["train"]["y"])

    preds = reg.predict(data["test"]["X"])
    mse = mean_squared_error(preds, data["test"]["y"])
    run.log('alpha', alpha)
    run.log('mse', mse)

    model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)

    # save model in the outputs folder
    with open(model_file_name, "wb") as file:
        joblib.dump(value=reg,
                    filename=os.path.join('./outputs/',
                                          model_file_name))

    print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))

fore protected by your Azure AD and gains all the benefits of Azure AD, such as MFA, advanced threat protection, integration with your corporate on-premises identities, etc.

The Azure Machine Learning workspace is part of the Azure portal and therefore protected by your Azure AD.

Figure 2: Left hand navigation of the Azure Machine Learning workspace

Publish and Deploy Using the Azure CLI
The next important section is the assets section, as can be seen in Figure 3.

Figure 3: The assets section of the Azure Machine Learning workspace

This area is where you can view and manage your actual work: your experiments, your models, the compute you provision, etc. To understand this section better, let's publish and run an experiment and see the entire process end-to-end.

Create a Model
Remember that for the purposes of this article, the actual experiment is unimportant. The same instructions apply to any kind of problem you may be attempting to solve. I'll use an openly available diabetes dataset that's available at https://www4.stat.ncsu.edu/~boos/var.select/diabetes.tab.txt. This dataset includes ten baseline variables (age, sex, body mass index, average blood pressure, and six blood serum measurements) that were obtained for each of n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline. Using this data, I can create a simple regression model to predict the progression of the disease in a patient given the ten baseline variables about the patient. The code for this experiment is really straightforward and can be seen in Listing 1.

The next step is to submit this as an experiment run. You can do so easily using the portal, the Azure ML SDK, or the Azure CLI. I'll show you how to do this using the Azure CLI.

First, attach yourself to the resource group and folder. This command isn't 100% necessary, but it'll help by not requiring you to specify the resource group and folder over and over again every time you wish to execute a command:

az ml folder attach -w sahilWorkspace -g ML

Once you've run the above command, you can now go ahead and request to have an Azure ML compute resource created for you. Note that a compute resource comes in many shapes and sizes. Here, you're creating a standard VM compute with one node. You can create this resource using this command:

az ml computetarget create
    amlcompute -n mycomputetarget
    --min-nodes 1 --max-nodes 1
    -s STANDARD_D3_V2

It's worth pointing out that the ML workspace gives you full control over virtual network settings, so you can keep this compute resource or associated storage accounts, etc., in their own virtual network, away from the prying eyes of the Internet. Your InfoSec team will probably be happy to hear that their valuable and sensitive training data will always be secure.

Once the above command finishes running, you should see a compute resource provisioned for you, as shown in Figure 4.

Figure 4: The newly created compute

The name of the compute resource is important. Now I wish to be able to submit my experiment, and in order to submit it, I


Listing 2: The sklearn.runconfig file

{
    "script": "train-sklearn.py",
    "framework": "Python",
    "communicator": "None",
    "target": "mycomputetarget",
    "environment": {
        "python": {
            "interpreterPath": "python",
            "userManagedDependencies": false,
            "condaDependencies": {
                "dependencies": [
                    "python=3.6.2",
                    "scikit-learn",
                    {
                        "pip": [
                            "azureml-defaults"
                        ]
                    }
                ]
            }
        },
        "docker": {
            "baseImage": "mcr.microsoft.com/azureml/base:0.2.4",
            "enabled": true,
            "gpuSupport": true
        }
    }
}

Listing 3: The dependencies file training-env.yml

name: project_environment
dependencies:
- python=3.6.2
- pip:
  - azureml-defaults
  - scikit-learn
  - numpy

need to supply a configuration. This configuration file resides in the .azureml folder in a file called sklearn.runconfig. You can see my sklearn.runconfig in Listing 2. Of special note in Listing 2 is the value of "target". Look familiar? That's the name of the compute target you created earlier.

You also need to provide the necessary dependencies your experiment depends on. I've chosen to provide those in a file called training-env.yml, the contents of which can be seen in Listing 3.

Assuming that you have a config.json in your .azureml folder pointing to the requisite subscription and ML workspace, you can submit an experiment using the following command:

az ml run submit-script
    -c sklearn -e test
    -d training-env.yml
    train-sklearn.py

By running the above command, you'll get a link to a Web view where you can track the status of the submitted run. At this time, you can just wait for this command to finish, or observe the status of the run under the "Experiments" tab under your ML workspace.

Once the run completes, notice that the ML workspace automatically stores a lot of details for the run, as can be seen in Figure 5.

Figure 5: Details of the run

Here are some of the details that the Azure ML workspace automatically keeps track of for you. It stores all the runs, along with who initiated them, when each was run, and whether or not it succeeded. It also plots the metrics as charts for you, so you can visually tell the output of a run. Under the Outputs tab, it stores all logs and outputs. The outputs can be the models, for instance. And finally, as you saw in Figure 5, it stores a snapshot of what was run to produce those outputs, so you have a snapshot in time of what you're about to register and deploy next.

Register a Model
In the tabs shown in Figure 5, under the Outputs tab, you can find the created models. Go ahead and download any one of the models, which should be a file ending in .pkl. The next thing you need to do is use this file and register the model.

In order to register the model, you can use the ML SDK, the Azure CLI, or do it directly through the browser UI. If you choose to do this using the Azure CLI, you can simply use the following command:

az ml model register -n mymodel
    -p sklearn_regression_model.pkl -t model.json

This command relies on three inputs. First is the name of the model you're creating, which is mymodel. The model file itself is sklearn_regression_model.pkl. The model.json file is a simple JSON file describing the version and workspace for the model. It can be seen here:

{
    "modelId": "mymodel:2",
    "workspaceName": "sahilWorkspace",
    "resourceGroupName": "ML"
}

Once you run the Azure CLI command successfully, you should see the model registered, as can be seen in Figure 6.

Figure 6: Our newly registered model

Listing 4: The scoring file

import json
import numpy as np
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model

from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType

def init():
    global model
    model_path = Model.get_model_path('mymodel')
    model = joblib.load(model_path)

input_sample = np.array([[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]])
output_sample = np.array([3726.995])

@input_schema('data', NumpyParameterType(input_sample))
@output_schema(NumpyParameterType(output_sample))
def run(data):
    try:
        result = model.predict(data)
        return result.tolist()
    except Exception as e:
        error = str(e)
        return error

Listing 5: The inference config file

entryScript: score.py
runtime: python
condaFile: scoring-env.yml
extraDockerfileSteps:
schemaFile:
sourceDirectory:
enableGpu: False
baseImage:
baseImageRegistry:

Listing 6: The deployment configuration file

---
containerResourceRequirements:
  cpu: 1
  memoryInGB: 1
computeType: ACI

Deploy a Model
Now that you have a model, you need to convert it into an API so users can call it and make predictions. You can choose to run this model as a local instance for development purposes. Or you can choose to run that container as an Azure container instance for QA testing purposes, or as an AKS cluster for production use.

There are three things you need to deploy your model:

• The entry script, which contains the scoring and monitoring logic. This is simply a Python file with two methods in it. One is to load the model as a global object and the other is to serve predictions. You can see the scoring file entry script in Listing 4.
• The inference config file, which has various configuration information, such as: what is the runtime location, what dependencies are you using, etc. You can see the inference configuration I'm using in Listing 5.
• The deployment configuration, which contains information about where you're deploying this endpoint to and under what configuration. For instance, if you're deploying to an Azure container instance or an Azure Kubernetes cluster, you'd include that information here. You can see the deployment configuration I'm using in Listing 6.

The following command will deploy your model to an ACI instance:

az ml model deploy
    -n acicicd
    -f model.json
    --ic inferenceConfig.yml
    --dc aciDeployment.yml
    --overwrite

Once you run the above command, you should see an image created for you, as you can see in Figure 7.

Figure 7: A newly created image

In each such created image, you're able to see the specific location on which the image resides. This is usually an auto-provisioned Azure container registry, and the workspace authenticates to it using a service principal. You can have more than one deployment per image, and you can track that in the properties of the created image as well.
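To see the scoring contract from Listing 4 without deploying anything, here's a local sketch of what the hosted service does: it deserializes a {"data": [[...]]} request body, hands the array to run(), and serializes the returned list. The StubModel below is my stand-in; the deployed service loads the registered ridge_*.pkl instead.

```python
import json

# Stand-in model (mine, not the article's): "predicts" the sum of the
# ten features per row, just to exercise the contract.
class StubModel:
    def predict(self, rows):
        return [sum(row) for row in rows]

model = StubModel()

# Same shape as run() in Listing 4, minus the inference_schema decorators.
def run(data):
    try:
        result = model.predict(data)
        return result
    except Exception as e:
        return str(e)

# What the service does with an incoming POST body:
body = json.loads('{"data": [[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}')
print(json.dumps(run(body["data"])))   # [55]
```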



Additionally, you can find a new deployment created for you, as can be seen in Figure 8.

Figure 8: Newly created deployment

For each deployment, the workspace allows you to track which model the deployment is from and when it was created or updated. This way, you can completely back-trace the model to the experiment version and dataset it came from, and who deployed it. At any point, you can choose to update the deployment, and it will track these changes also.

Finally, as you can see in Figure 9, you can grab the scoring URI for your newly deployed model. It's this scoring URI that your clients can make POST requests to, in order to make predictions against your model.
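The article doesn't show client code, but a minimal sketch of such a POST (the scoring URI below is a placeholder for the one in Figure 9; the {"data": [[...]]} body shape matches the input_schema in Listing 4) might look like this:

```python
import json
import urllib.request

# Hypothetical client for the deployed scoring endpoint. The URI is a
# placeholder; substitute the scoring URI shown in Figure 9.
def build_scoring_request(scoring_uri, rows):
    body = json.dumps({"data": rows}).encode("utf-8")
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_scoring_request(
    "http://example.invalid/score",        # placeholder URI
    [[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]],     # one row of the ten features
)
# urllib.request.urlopen(req) would then return the JSON predictions.
print(req.get_method())     # POST
print(req.data.decode())    # {"data": [[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}
```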

Automating Using MLOps
So far in this article, I've shown you how to use the Azure CLI to run an experiment, create a model, create an image, and deploy a model. In this process, I demonstrated all of the value that the Azure Machine Learning workspace adds to the overall process.

But at the center of any AI project are lots of data and algorithms. Data is usually managed in some sort of data store; it could be anything, as long as your code can talk to it. But the brain trust is in the algorithms. The algorithms are written as code, usually in Jupyter notebooks. And like any other project, you'll need to source-control them.

Like any other project, you'll need to source-control algorithms.

A great way to manage any software project is Azure DevOps. It lets you manage all aspects of a software project. Issues are a big part of DevOps, sprint planning is another, and source control is also an important aspect. A rather interesting aspect of DevOps is pipelines. Pipelines let you automate the process of building and releasing your code via

Figure 9: The scoring URI

Figure 10: The Azure Resource Manager Service connection



steps. All of these important facets, code, sprints, issues,
SPONSORED SIDEBAR:
and pipelines can work together with each other.
®
An AI project is just like any other software project. It needs code, it needs data, it needs issue tracking, it needs testing, it needs automation. And DevOps can help you automate this entire process, end to end.

For AI specifically, you can use MLOps to automate everything you’ve seen in this article so far, via a DevOps pipeline. For MLOps to work, there are four main things you need to do.

First, you need to get your code into the DevOps repository. This is not 100% necessary, because DevOps can work with other source control repositories. However, let’s just say that you get your code in some source code repository that DevOps can read from, and because DevOps does come with a pretty good source control repository, perhaps just go ahead and use that.

Secondly, install the Machine Learning extension in your DevOps repo from this link: https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml.

Once this extension is installed, create a new Azure Resource Manager Service connection, as can be seen in Figure 10.

Provisioning this connection creates a service principal in your Azure tenancy, which has the ability to provision or deprovision resources, as needed, in an automated fashion. It’s this service connection, called ML, that is used by the pipeline.
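If you prefer scripting this step, a service principal with similar rights can also be created from the Azure CLI and then referenced when you define the service connection. This is only a sketch, not what the portal runs for you; the principal name is made up here, and the subscription ID is a placeholder you must substitute:

```shell
# Sketch: create a service principal able to provision/deprovision
# resources, similar to what the ARM service connection creates.
# "MLOpsPipeline" is an arbitrary name; <subscription-id> is a placeholder.
az ad sp create-for-rbac \
  --name "MLOpsPipeline" \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>"
```

The command prints the app ID and password you would paste into the service connection dialog if you choose its manual (non-automatic) mode.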

Finally, create a pipeline with the code as shown in Listing 7.

Let’s walk through what this pipeline is doing. The first thing you note is that it’s using the Azure CLI, and it’s doing so using the service connection you created earlier. Besides that, it’s running on an Ubuntu agent.

It first installs Python 3.6 and then installs all the necessary dependencies that the code depends on. It does so using pip, which is a package installer for Python. Then it adds the Azure CLI ML extensions. This step is necessary because the agent comes with the Azure CLI but doesn’t come with the ML extensions.

It then attaches itself to the workspace and resource group. This step could be automated further by provisioning and deprovisioning a workspace and resource group as necessary.

It then creates a compute target, followed by running the experiment, registering the model as an image, and creating a deployment, and when you’re done, you delete the compute so you don’t have to pay for it.

All of this is set to trigger automatically if a code change occurs on the master branch.
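The `trigger` block at the top of Listing 7 is what wires this up. If you want to narrow when the pipeline fires, standard Azure Pipelines YAML also accepts branch and path filters. The sketch below assumes you only want to rebuild when the training code changes; the `model-training` folder name is the one used by the tasks in Listing 7:

```yaml
# Sketch: fire only on master, and only when training code changes.
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - model-training/*
```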
The end result of all this is that as soon as someone commits code into the master branch, the whole process runs in an automated fashion, and it creates a scoring URI for you to test. You get notified of success and failure, and you get basically all of the other facilities that Azure DevOps offers.


Listing 7: The DevOps pipeline
trigger: azureSubscription: 'ML'
- master scriptLocation: 'inlineScript'
inlineScript: 'az ml computetarget
pool: create amlcompute -n mycomputetarget
vmImage: 'Ubuntu-16.04' --min-nodes 1 --max-nodes 1 -s STANDARD_D3_V2'
workingDirectory: 'model-training'
steps:
- task: UsePythonVersion@0 - task: AzureCLI@1
displayName: 'Use Python 3.6' inputs:
inputs: azureSubscription: 'ML'
versionSpec: 3.6 scriptLocation: 'inlineScript'
inlineScript: 'az ml run submit-script
- script: | -c sklearn -e test
pip install flake8 -d training-env.yml train-sklearn.py'
pip install flake8_formatter_junit_xml workingDirectory: 'model-training'
flake8 --format junit-xml
--output-file - task: AzureCLI@1
$(Build.BinariesDirectory)/flake8_report.xml inputs:
--exit-zero --ignore E111 azureSubscription: 'ML'
displayName: 'Check code quality' scriptLocation: 'inlineScript'
inlineScript: 'az ml model register
- task: PublishTestResults@2 -n mymodel -p sklearn_regression_model.pkl -t model.json'
condition: succeededOrFailed() workingDirectory: 'model-deployment'
inputs:
testResultsFiles: '$(Build.BinariesDirectory)/*_report.xml' - task: AzureCLI@1
testRunTitle: 'Publish test results' inputs:
azureSubscription: 'ML'
- task: AzureCLI@1 scriptLocation: 'inlineScript'
inputs: inlineScript: 'az ml model deploy
azureSubscription: 'ML' -n acicicd -f model.json
scriptLocation: 'inlineScript' --ic inferenceConfig.yml
inlineScript: 'az extension add -n azure-cli-ml' --dc aciDeploymentConfig.yml --overwrite'
workingDirectory: 'model-training' workingDirectory: 'model-deployment'

- task: AzureCLI@1 - task: AzureCLI@1


inputs: inputs:
azureSubscription: 'ML' azureSubscription: 'ML'
scriptLocation: 'inlineScript' scriptLocation: 'inlineScript'
inlineScript: 'az ml folder attach inlineScript: 'az ml computetarget
-w sahilWorkspace -g ML' delete -n mycomputetarget'
workingDirectory: '' workingDirectory: ''

- task: AzureCLI@1
inputs:

Summary
The Azure Machine Learning workspace is an incredible tool for your AI projects. In a real-world AI project, you’ll most likely work with multiple collaborators. You will have well-defined roles. Your data will need to be kept secure and you’ll have to worry about versions. That’s versions not just of your code but also your data, your experiments, details of all your deployments, created models, etc.

The Azure ML workspace automates all of this for you, and it records all of it behind the scenes for you as a part of your normal workflow. Later, if your customers come and ask you a question such as, “Hey, why did you make such a prediction at such a time?” you can easily trace your steps back to the specific deployment, specific algorithm, specific parameters, and specific input data that caused you to make that prediction.

Did you know that researchers once fooled a Google image recognition algorithm by subtly altering a single picture of a turtle, so Google would interpret it as a rifle? These kinds of attacks are new to AI. And the ML workspace helps you track all of this kind of thing very well. You still have to put in the work to secure your artifacts end to end, but the ML workspace is a great management tool.

Finally, I showed you how to automate this entire process end to end using an MLOps pipeline, like you would do in any other software project.

Until next time!

Sahil Malik


ONLINE QUICK ID 1909031

A Design Pattern for Building WPF Business Applications: Part 3

In parts 1 and 2 of this series on building a WPF business application, you created a new WPF business application using a pre-existing architecture. You added code to display a message while loading resources in the background. You also learned how to load and close user controls on a main window. In part 2 of this series, you displayed a status message by sending a message

from a view model class to the main window. You also displayed informational messages and made them disappear after a specified period. You created a WPF login screen complete with validation.

In part 3 of this series, you’ll build a user feedback screen to allow a user to submit feedback about the application. You build a view model and bind an Entity Framework entity class to the screen. The entity class contains data annotations, and you learn to display validation messages from any data annotations that fail validation. You also start learning how to build a design pattern for standard add, edit, and delete screens. You build a user list control and a user detail control to display all users in a table, and the detail for each one you click on.

This article is the third in a multi-part series on how to create a WPF business application. Instead of starting completely from scratch, I’ve created a starting architecture that you can learn about by reading the blog post entitled “An Architecture for WPF Applications” located at https://bit.ly/2BxpK0P. Download the samples that go along with the blog post to follow along step-by-step with this article. This series of articles is also a Pluralsight.com course you may view at https://bit.ly/2SjwTeb. You can also read the previous articles in the May/June and July/August issues of CODE Magazine (https://www.codemag.com/Magazine/AllIssues).

Paul D. Sheriff
http://www.fairwaytech.com

Paul D. Sheriff is a Business Solutions Architect with Fairway Technologies, Inc. Fairway Technologies is a premier provider of expert technology consulting and software development services, helping leading firms convert requirements into top-quality results. Paul is also a Pluralsight author. Check out his videos at http://www.pluralsight.com/author/paul-sheriff.

Create a WPF User Feedback Screen
Create a screen for the user to input feedback to your support department about your WPF application, as shown in Figure 1. On this screen, validate the data using the Entity Framework. The rules that fail in EF are going to be converted into validation messages to be displayed in the same manner as presented in the last article.

The user feedback screen (Figure 1) places the labels above each input field. The label style in the StandardStyles.xaml file sets the margin property to 4. However, this would place the labels too far to the right above the input fields. You’re going to create a new style just on this screen to move the margin to the left. This style overrides the global Margin setting for labels. Open the UserFeedbackControl.xaml file and locate the <UserControl.Resources> element. Add a new keyed style for labels.

<UserControl.Resources>
  <vm:UserFeedbackViewModel x:Key="viewModel" />
  <Style TargetType="Label"
         x:Key="feedbackLabels">
    <Setter Property="Margin"
            Value="0,0" />
  </Style>
</UserControl.Resources>

Remove the <StackPanel> with the text box and button that you added in the previous article. There are two columns on this feedback screen; one for the large vertical “Feedback” column, and one for all the input fields. Add a <ScrollViewer> and a <Grid> within the <Border> as shown in the following code.

<ScrollViewer VerticalScrollBarVisibility="Auto">
  <Grid DataContext="{Binding
        Source={StaticResource viewModel}}">
    <Grid.ColumnDefinitions>
      <ColumnDefinition Width="Auto" />
      <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
  </Grid>
</ScrollViewer>

Add a Large Vertical Column
On the left side of this screen, there’s a raised area that you build using a <Border> with a linear gradient brush, a label, and an image. Build the large vertical column using the code shown in Listing 1. Add this code just below the closing </Grid.ColumnDefinitions> element.

Listing 1: Build the large vertical column using a border.

<Border Grid.Column="0"
        Margin="10"
        CornerRadius="10">
  <Border.Background>
    <LinearGradientBrush EndPoint="1,0.5"
                         StartPoint="0,0.5">
      <GradientStop Color="Gray"
                    Offset="0" />
      <GradientStop Color="DarkGray"
                    Offset="1" />
    </LinearGradientBrush>
  </Border.Background>
  <StackPanel>
    <Label Content="Feedback"
           Style="{StaticResource inverseLabel}"
           Margin="10" />
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Envelope_White.png" />
  </StackPanel>
</Border>

Add a Grid for Input Fields
The second column (on the right) of this screen is where you place the area for the user to input the data. Add a new <Grid> below the closing </Border> element. Add 10 row definitions for this new grid, as shown in Listing 2.



Listing 2: Multiple rows are needed for vertical screens.

<Grid Grid.Column="1"
      Margin="10">
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="*" />
  </Grid.RowDefinitions>
</Grid>

Add Labels and Input Fields
Below this new closing </Grid.RowDefinitions> element, add the label and text box controls shown in Listing 3. Each of the text box controls is bound to an Entity property that you’re going to add to the user feedback view model class later in this article.

Listing 3: Use labels and text boxes to build inputs for the user feedback screen.

<Label Grid.Row="0"
       Style="{StaticResource feedbackLabels}"
       Content="Name" />
<TextBox Grid.Row="1"
         Text="{Binding Path=Entity.Name}" />
<Label Grid.Row="2"
       Style="{StaticResource feedbackLabels}"
       Content="Email Address" />
<TextBox Grid.Row="3"
         Text="{Binding Path=Entity.EmailAddress}" />
<Label Grid.Row="4"
       Style="{StaticResource feedbackLabels}"
       Content="Phone Extension" />
<TextBox Grid.Row="5"
         Text="{Binding Path=Entity.PhoneExtension}" />
<Label Grid.Row="6"
       Style="{StaticResource feedbackLabels}"
       Content="Feedback Message" />
<TextBox Grid.Row="7"
         Text="{Binding Path=Entity.Message}"
         AcceptsReturn="True"
         TextWrapping="Wrap"
         Height="150" />

Add Buttons
You need a Close button and a Send Feedback button just below the input fields. Add a <StackPanel> element, shown below, in which to place these two buttons. After entering this XAML, create the event procedure for the SendFeedbackButton_Click event by pressing the F12 key while positioned over the “SendFeedbackButton_Click” text in the Click attribute. The CloseButton_Click event procedure was created in a previous article.

<StackPanel Grid.Row="8"
            Orientation="Horizontal"
            HorizontalAlignment="Right">
  <Button Content="Close"
          IsCancel="True"
          Style="{StaticResource cancelButton}"
          Click="CloseButton_Click" />
  <Button Content="Send Feedback"
          IsDefault="True"
          Style="{StaticResource submitButton}"
          Click="SendFeedbackButton_Click" />
</StackPanel>

Figure 1: A user feedback form

Add a List Box for Validation
In the last row in this screen, add a list box control just like you did on the Login screen in the last article. If you want, create this <ListBox> control as another user control and include that control on this screen and the Login screen.

<!-- Validation Message Area -->
<ListBox Grid.Row="9"
         Style="{StaticResource validationArea}"
         Visibility="{Binding IsValidationVisible,
             Converter={StaticResource visibilityConverter}}"
         ItemsSource="{Binding ValidationMessages}"
         DisplayMemberPath="Message" />

Try it Out
Run the application, log in as a valid user, and click on the Feedback menu item to display the screen. If you’ve done everything correctly, the screen should look like Figure 1.

Add UserFeedback Table
To store the data entered on this screen, you need to build a UserFeedback table in the Sample.mdf database. NOTE: This table is already in the Sample.mdf file that comes with the starting application. The following steps instruct you on how to build the table.

Double-click on the Sample.mdf located in the App_Data folder to bring up the Server Explorer window. Right mouse-click on the Tables folder and select New Query from the menu. Add the following SQL code and click the Execute icon.

CREATE TABLE [dbo].[UserFeedback] (
  [UserFeedbackId] INT IDENTITY (1, 1) NOT NULL,


  [Name] NVARCHAR (50) NOT NULL,
  [PhoneExtension] NVARCHAR (10) NULL,
  [Message] NVARCHAR (MAX) NOT NULL,
  [EmailAddress] NVARCHAR (255) NOT NULL,
  PRIMARY KEY CLUSTERED ([UserFeedbackId] ASC)
);

Go back to the Server Explorer window and right mouse-click on the Tables folder and select the Refresh menu to see the new table.

Add User Feedback to the Data Layer
Once you have the new table created in the database, you need to perform three more steps to interact with this table.

• Create an entity class named UserFeedback.
• Add a DbSet property to the SampleDbContext class.
• Add validation code to ensure that good data is entered in the UserFeedback table.

Add UserFeedback Entity Class
Open the WPF.Sample.DataLayer project and right mouse-click on the EntityClasses folder and select Add > Class… from the menu. Enter the name UserFeedback and click the OK button. Replace the contents of the generated file with the code shown in Listing 4. This code is a standard EF entity class to map properties to the columns in the SQL table. Feel free to use the EF generation tools to generate this code if you want. Be sure to add the ErrorMessage property to the [Required] attributes so you can display a user-friendly error message if the user doesn’t provide the required data.

Listing 4: Create the appropriate input entity class for the user feedback screen.

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using Common.Library;

namespace WPF.Sample.DataLayer
{
  [Table("UserFeedback")]
  public class UserFeedback : CommonBase
  {
    private int _UserFeedbackId;
    private string _Name = string.Empty;
    private string _EmailAddress = string.Empty;
    private string _PhoneExtension = string.Empty;
    private string _Message = string.Empty;

    [Required]
    [Key]
    public int UserFeedbackId
    {
      get { return _UserFeedbackId; }
      set {
        _UserFeedbackId = value;
        RaisePropertyChanged("UserFeedbackId");
      }
    }

    [Required(ErrorMessage =
      "User Name must be filled in.")]
    public string Name
    {
      get { return _Name; }
      set {
        _Name = value;
        RaisePropertyChanged("Name");
      }
    }

    [Required(ErrorMessage =
      "Email Address must be filled in.")]
    public string EmailAddress
    {
      get { return _EmailAddress; }
      set {
        _EmailAddress = value;
        RaisePropertyChanged("EmailAddress");
      }
    }

    public string PhoneExtension
    {
      get { return _PhoneExtension; }
      set {
        _PhoneExtension = value;
        RaisePropertyChanged("PhoneExtension");
      }
    }

    [Required(ErrorMessage =
      "Feedback Message must be filled in.")]
    public string Message
    {
      get { return _Message; }
      set {
        _Message = value;
        RaisePropertyChanged("Message");
      }
    }
  }
}

Update the SampleDbContext Class
For EF to select records from and modify data in the UserFeedback table, add a DbSet property in the SampleDbContext class. Open the SampleDbContext.cs file and add a new DbSet property.

public virtual DbSet<UserFeedback>
  UserFeedbacks { get; set; }

Add a Method to Convert EF Validation Errors to ValidationMessage Objects
The Entity Framework uses data annotation attributes to generate validation errors automatically for you. It raises an error that contains a collection of validation errors. The structure of this collection doesn’t lend itself well to data binding on a WPF screen, so write a method to convert these validation errors to a collection of ValidationMessage objects. Add a few using statements at the top of the SampleDbContext.cs file, as shown below.

using System.Collections.Generic;
using System.Data.Entity.Validation;
using System.Linq;
using Common.Library;

Add a method to the SampleDbContext class, Listing 5, to perform the conversion of the EF validation errors into a collection of ValidationMessage objects. This method takes the ErrorMessage and PropertyName properties from the Entity Framework object and assigns them to a new ValidationMessage object. This object is added to a list of ValidationMessage objects that’s returned from this method.

Listing 5: Convert EF validation objects to ValidationMessage objects

public List<ValidationMessage>
  CreateValidationMessages(
    DbEntityValidationException ex)
{
  List<ValidationMessage> ret =
    new List<ValidationMessage>();

  // Retrieve the error messages from EF
  foreach (DbValidationError error in
           ex.EntityValidationErrors
             .SelectMany(x => x.ValidationErrors)) {
    ret.Add(new ValidationMessage {
      Message = error.ErrorMessage,
      PropertyName = error.PropertyName
    });
  }

  return ret;
}

Modify the User Feedback View Model Class
Add a property named Entity to the UserFeedbackViewModel class to hold the data input on the screen. A Save() method is needed to submit the data to the database. You’re going to also add a stub of a SendFeedback() method in case you want to email the feedback to your support department.

Add the Entity Property
Open the UserFeedbackViewModel.cs file and add the following using statements at the top of this file.

using System;
using System.Collections.ObjectModel;
using System.Data.Entity.Validation;
using WPF.Sample.DataLayer;

Add the Entity property that’s of the type UserFeedback to the UserFeedbackViewModel class, as shown in the code below.

private UserFeedback _Entity =
  new UserFeedback();

public UserFeedback Entity
{
  get { return _Entity; }
  set {
    _Entity = value;
    RaisePropertyChanged("Entity");
  }
}

Add a Save Method
The data entered by the user on the User Feedback screen is going to be saved into the UserFeedback table you created. The Save() method, shown in Listing 6, uses the SampleDbContext class to attempt to add the data. If the data is correct, a new record is added to the table; if the data isn’t correct, a DbEntityValidationException exception is thrown. Take the DbEntityValidationException object and pass it to the CreateValidationMessages() method you wrote in the SampleDbContext class. Store the return result from this method into the ValidationMessages property on the view model. Set the IsValidationVisible property to true to display the validation messages.

Listing 6: The Save() method adds user feedback and reports validation errors

public bool Save()
{
  bool ret = false;
  SampleDbContext db = null;

  try {
    db = new SampleDbContext();
    // Add user feedback to database
    db.UserFeedbacks.Add(Entity);
    db.SaveChanges();

    ret = true;
  }
  catch (DbEntityValidationException ex) {
    ValidationMessages = new
      ObservableCollection<ValidationMessage>(
        db.CreateValidationMessages(ex));
    IsValidationVisible = true;
  }
  catch (Exception ex) {
    PublishException(ex);
  }
  return ret;
}

Add a SendFeedback Method
You may want to send an email to a specific person when one of the feedback messages is stored in the UserFeedback table. The SendFeedback() method, Listing 7, is where you might perform this. This article isn’t going to cover writing that code, but use the code shown in Listing 7 to save the data and display an informational message that the feedback message was sent.

Listing 7: After saving, send a feedback message to your support department

public bool SendFeedback()
{
  bool ret = false;

  // Save/Validate the data
  if (Save()) {
    // TODO: Send the Feedback Message here

    // Display Informational Message
    MessageBroker.Instance.SendMessage(
      MessageBrokerMessages.
        DISPLAY_TIMEOUT_INFO_MESSAGE_TITLE,
      "Feedback Message Sent.");

    ret = true;

    // Close the user feedback form
    Close(false);
  }

  return ret;
}

Update Code Behind
Open the UserFeedbackControl.xaml.cs file and locate the SendFeedbackButton_Click() event procedure. Call the SendFeedback() method from this event.

private void SendFeedbackButton_Click(
  object sender, RoutedEventArgs e)
{
  // Send/Save Feedback
  _viewModel.SendFeedback();
}

Try It Out
Run the application and click on the Feedback menu item. Click the Send Feedback button without entering any data to ensure that the validation is working. Next, enter some


valid information into each field and click the Send Feedback button again. Open the Sample database and check the UserFeedback table to see if the data was stored successfully.

A Design Pattern for Master/Detail Screens
The next screen you’re going to create is one to list, add, edit, and delete users, as shown in Figure 2. To accomplish this, build two separate user controls; one for the list of users, and one that displays the detail for an individual user. These two controls will be placed onto the user maintenance control that you already built. As you build this screen, you’re going to create some generic classes to use for any CRUD screen that you need to create.

Figure 2: The sample application with a user list and detail user controls.

Display a List of Users
Right mouse-click on the UserControls folder and add a User Control named UserMaintenanceListControl to this project. Remove the <Grid></Grid> element and add a <ListView> control that looks like the following.

<ListView ItemsSource="{Binding Path=Users}">
  <ListView.View>
    <GridView>
      GRID COLUMNS GO HERE
    </GridView>
  </ListView.View>
</ListView>

Within the <GridView></GridView> element, add several <GridViewColumn> elements (Listing 8) to display the columns shown in Figure 2. There are some images included in the sample application that you can use to display the edit and delete icons. Besides the normal <GridViewColumn> controls bound to each individual property, notice the buttons in the CellTemplate controls for the edit and delete buttons. The Tag property of each button contains {Binding}. When you do not specify a path, the complete object is bound to that property. This means a reference to the instance of the User class is bound to this property. When you click on these buttons, you can retrieve the user object for the row that was clicked upon. You’ll use this user object in the next article for editing and deleting. After adding the code from Listing 8, be sure to create the EditButton_Click and the DeleteButton_Click event procedures.

Create View Model for the User List
Right mouse-click on the WPF.Sample.ViewModelLayer project and add a new class named UserMaintenanceListViewModel.cs. Add the code in Listing 9 to this new file you created. Add a property to this view model class that is an ObservableCollection of User objects. The method LoadUsers() is used to fill the Users property from the Entity Framework DbContext class.

Modify the User Maintenance View Model
Open the UserMaintenanceViewModel.cs file and change the inheritance from ViewModelBase to UserMaintenanceListViewModel.

Listing 8: A ListView allows you to put buttons within any column you want

<GridViewColumn Header="Edit">
  <GridViewColumn.CellTemplate>
    <DataTemplate>
      <Button
        Style="{StaticResource toolbarButton}"
        Click="EditButton_Click"
        Tag="{Binding}"
        ToolTip="Edit Current User">
        <Image Source="pack://application:,,,/WPF.Common;component/Images/Edit_Black.png" />
      </Button>
    </DataTemplate>
  </GridViewColumn.CellTemplate>
</GridViewColumn>
<GridViewColumn Header="User ID"
                Width="Auto"
                DisplayMemberBinding="{Binding Path=UserId}" />
<GridViewColumn Header="User Name"
                Width="Auto"
                DisplayMemberBinding="{Binding Path=UserName}" />
<GridViewColumn Header="First Name"
                Width="Auto"
                DisplayMemberBinding="{Binding Path=FirstName}" />
<GridViewColumn Header="Last Name"
                Width="Auto"
                DisplayMemberBinding="{Binding Path=LastName}" />
<GridViewColumn Header="Email"
                Width="Auto"
                DisplayMemberBinding="{Binding Path=EmailAddress}" />
<GridViewColumn Header="Delete">
  <GridViewColumn.CellTemplate>
    <DataTemplate>
      <Button
        Style="{StaticResource toolbarButton}"
        Click="DeleteButton_Click"
        Tag="{Binding}"
        ToolTip="Delete Current User">
        <Image Source="pack://application:,,,/WPF.Common;component/Images/Trash_Black.png" />
      </Button>
    </DataTemplate>
  </GridViewColumn.CellTemplate>
</GridViewColumn>
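Because Tag="{Binding}" binds the entire row object, the EditButton_Click and DeleteButton_Click stubs you just created can cast the sender’s Tag back to a User. The real editing and deleting logic comes in the next article; the body below is only a sketch of retrieving the bound object, and the Debug.WriteLine call is just a placeholder:

```csharp
// Sketch: retrieve the User bound to the clicked row's button.
// The actual edit logic is covered in the next article.
private void EditButton_Click(object sender, RoutedEventArgs e)
{
    // Tag="{Binding}" bound the whole User instance to the Tag property
    User user = (User)((Button)sender).Tag;

    // Placeholder: prove you have the right row's user
    System.Diagnostics.Debug.WriteLine(
      "Editing user: " + user.UserName);
}
```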


Listing 9: Load users into an ObservableCollection so any bound control gets notification of changes

using System;
using System.Collections.ObjectModel;
using Common.Library;
using WPF.Sample.DataLayer;

namespace WPF.Sample.ViewModelLayer
{
  public class UserMaintenanceListViewModel
    : ViewModelBase
  {
    private ObservableCollection<User> _Users =
      new ObservableCollection<User>();

    public ObservableCollection<User> Users
    {
      get { return _Users; }
      set {
        _Users = value;
        RaisePropertyChanged("Users");
      }
    }

    public virtual void LoadUsers()
    {
      SampleDbContext db = null;

      try {
        db = new SampleDbContext();

        Users = new
          ObservableCollection<User>(db.Users);
      }
      catch (Exception ex) {
        System.Diagnostics.Debug.WriteLine(
          ex.ToString());
      }
    }
  }
}

Because the UserMaintenanceListViewModel inherits from the ViewModelBase class, you only need to inherit from the UserMaintenanceListViewModel class to get all its functionality as well as that of the ViewModelBase class.

public class UserMaintenanceViewModel
  : UserMaintenanceListViewModel

Modify the User Maintenance User Control
Open the UserMaintenanceControl.xaml file and in the attributes of the <UserControl>, add a Loaded event.

Loaded="UserControl_Loaded"

Build the solution to ensure that everything compiles correctly. Remove the <StackPanel> from the UserMaintenanceControl.xaml. Open the Toolbox and locate the UserMaintenanceListControl you just created and drag and drop that control within the <Border>. After dragging the list control onto the maintenance control, add the DataContext for the UserMaintenanceListControl to reference the view model object of the UserMaintenanceControl, as shown in the following code snippet.

<Border Style="{StaticResource screenBorder}">
  <UserControls:UserMaintenanceListControl
    DataContext="{StaticResource viewModel}" />
</Border>

Open the UserMaintenanceControl.xaml.cs file and locate the UserControl_Loaded() event procedure you just created. Call the LoadUsers() method you just added. This method is responsible for loading the users, and because the Users collection is bound to the ListView control, and the user control is bound to the view model on which that Users collection is located, this causes the users to be displayed within the ListView.

private void UserControl_Loaded(object sender,
  System.Windows.RoutedEventArgs e)
{
  _viewModel.LoadUsers();
}

Try It Out
Run the application and click on the Users menu item to see a list of users appear. If you don’t see any users appear, check to ensure that there were no errors when loading users. Also, check that you’ve added some users to the User table in your SQL Server database.

Getting the Sample Code
You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting resources.fairwaytech.com/downloads. Select “Fairway/PDSA Articles” from the Category drop-down. Then select “A Design Pattern for Building WPF Business Applications - Part 3” from the Item drop-down.

Display User Detail
In Figure 2, you saw that the bottom of the screen contains the detail for a single user. When you click on a row in the ListView control, you want to display the currently selected user within the details area. Add a new user control named UserMaintenanceDetailControl.xaml within the UserControls folder of the project. Modify the <Grid> element so that it has two columns and six rows, as shown in the code below.

<Grid>
  <Grid.ColumnDefinitions>
    <ColumnDefinition Width="Auto" />
    <ColumnDefinition Width="*" />
  </Grid.ColumnDefinitions>
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition />
  </Grid.RowDefinitions>
</Grid>

After the closing </Grid.RowDefinitions> element, add the label and text box controls shown in Listing 10. In the view model you’re going to create in the next section, an Entity property is created of the type User. You can see that the path to the bindings on each text box control is bound to the Entity property followed by the name of a property in the User class.

After the labels and text box controls, add a stack panel (Listing 11) for the Undo and Save buttons. Use Image and TextBlock controls within each Button control to present an image and text to the user for the Save and Undo functionality.

Create a User Detail View Model
In the UserMaintenanceDetailControl user control, you see that you’re binding to the properties of an Entity object. This Entity object is going to be in a view model for the


Listing 10: Create the labels and text boxes for the user detail

<Label Grid.Row="0"
       Content="User Name" />
<TextBox Grid.Row="0"
         Grid.Column="1"
         Text="{Binding Path=Entity.UserName}" />
<Label Grid.Row="1"
       Content="First Name" />
<TextBox Grid.Row="1"
         Grid.Column="1"
         Text="{Binding Path=Entity.FirstName}" />
<Label Grid.Row="2"
       Content="Last Name" />
<TextBox Grid.Column="1"
         Grid.Row="2"
         Text="{Binding Path=Entity.LastName}" />
<Label Grid.Row="3"
       Content="Email Address" />
<TextBox Grid.Column="1"
         Grid.Row="3"
         Text="{Binding Path=Entity.EmailAddress}" />

Listing 11: Bind up all your text boxes so the user can input all their data

<StackPanel Grid.Column="1"
            Grid.Row="4"
            Orientation="Horizontal">
  <Button IsCancel="True"
          Style="{StaticResource toolbarButton}">
    <StackPanel Orientation="Horizontal"
                Style="{StaticResource toolbarButtonStackPanel}">
      <Image Source="pack://application:,,,/WPF.Common;component/Images/Undo_Black.png"
             Style="{StaticResource toolbarImage}" />
      <TextBlock Text="Undo" />
    </StackPanel>
  </Button>
  <Button IsDefault="True"
          Style="{StaticResource toolbarButton}">
    <StackPanel Orientation="Horizontal"
                Style="{StaticResource toolbarButtonStackPanel}">
      <Image Source="pack://application:,,,/WPF.Common;component/Images/Save_Black.png"
             Style="{StaticResource toolbarImage}" />
      <TextBlock Text="Save" />
    </StackPanel>
  </Button>
</StackPanel>

details control. Right mouse-click on the ViewModels folder and add a new class named UserMaintenanceDetailViewModel.cs.

After creating this class, inherit from the UserMaintenanceListViewModel from the previous article. This provides you with all the functionality of the UserMaintenanceListViewModel class, plus anything you add to the UserMaintenanceDetailViewModel class. Make the new view model file look like the following.

using WPF.Sample.DataLayer;

namespace WPF.Sample.ViewModelLayer
{
  public class UserMaintenanceDetailViewModel :
    UserMaintenanceListViewModel
  {
  }
}

Add a property named Entity that is of the type User. This is the Entity property that you bind to the controls on the UserMaintenanceDetail user control. Be sure to add a using statement for the WPF.Sample.DataLayer namespace at the top of this file.

private User _Entity = new User();

public User Entity
{
  get { return _Entity; }
  set {
    _Entity = value;
    RaisePropertyChanged("Entity");
  }
}

Override the LoadUsers Method
After loading the list of users, it would be nice to set the Entity property to the first item in the list. This causes the binding on the user detail control to display the values for that user in the bound text box controls. Override the LoadUsers() method, call the base.LoadUsers() method, and then check to ensure that the Users collection has some users. Set the Entity property to the first user in the Users collection. Setting this property causes the RaisePropertyChanged event to be fired. This, in turn, causes the UI to redisplay the new values on the detail screen.

public override void LoadUsers()
{
  // Load all users
  base.LoadUsers();

  // Set default user
  if (Users.Count > 0) {
    Entity = Users[0];
  }
}

Modify the User Maintenance View Model
Open the UserMaintenanceViewModel.cs file and change the inheritance from UserMaintenanceListViewModel to UserMaintenanceDetailViewModel. You now have separate view models for each of the three user controls you've built. Because each view model inherits from the one before it, the UserMaintenanceViewModel gets all of the functionality from the detail and list view models.

public class UserMaintenanceViewModel :
  UserMaintenanceDetailViewModel
{
  ...
}

Modify the User List Control
Open the UserMaintenanceListControl.xaml file and add the SelectedItem attribute to the <ListView> control. This binds the SelectedItem property to the Entity property in the UserMaintenanceDetailViewModel class. When the user clicks on a new row in the ListView control, the SelectedItem binding updates the Entity property. When this property is updated,



Listing 12: Build a toolbar using buttons and images

<ToolBar Grid.Row="0">
  <Button Style="{StaticResource toolbarButton}"
          ToolTip="Add New User">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Plus_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Separator />
  <Button Style="{StaticResource toolbarButton}"
          ToolTip="Edit Current User">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Edit_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Button Style="{StaticResource toolbarButton}"
          ToolTip="Delete Current User">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Trash_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Separator />
  <Button Style="{StaticResource toolbarButton}"
          ToolTip="Undo Changes">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Undo_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Button Style="{StaticResource toolbarButton}"
          ToolTip="Save Changes">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Save_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
</ToolBar>

the RaisePropertyChanged event fires, which then forces the UI to update to the new values in the new User object.

<ListView ItemsSource="{Binding Path=Users}"
          SelectedItem="{Binding Path=Entity}">

Aggregate All Controls and the Toolbar
The UserMaintenanceControl.xaml has three rows, as shown in Figure 2. The first row contains a toolbar, the second row the user list control, and the third row the user detail control. Open the UserMaintenanceControl.xaml file and add a <Grid> control within the <Border> and move the user list control within the <Grid>. Add a DataContext to this grid to bind to the viewModel resource. Set the Grid.Row attribute to 1 so the list control appears in the second row of this grid. Set the Name of the control to listControl in case you wish to access any methods on this control.

<Grid DataContext="{StaticResource viewModel}">
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
  </Grid.RowDefinitions>
  <UserControls:UserMaintenanceListControl
    Grid.Row="1"
    x:Name="listControl"
    DataContext="{StaticResource viewModel}" />
</Grid>

Drag the User Detail Control onto the User Maintenance Control
Open the Toolbox and drag the UserMaintenanceDetailControl below the user list control. Modify this new control by adding the attribute Grid.Row="2" to place this control in the third row of the grid. Add the DataContext attribute to bind to the view model object on the user maintenance user control. Set the Name attribute to detailControl in case you wish to access any public methods on this control.

<local:UserMaintenanceDetailControl
  Grid.Row="2"
  x:Name="detailControl"
  DataContext="{StaticResource viewModel}" />

Try It Out
Run the application and click on the Users menu item. The user list and detail screen should now display and show the first user's detail in the detail area. Click on other users in the list view control and you should see that the user detail area is updated with the new user information for each click you perform.

Create Toolbar
Just above the user list control, add a <ToolBar> control into which you add some buttons, as shown in Listing 12. The toolbar doesn't work yet, but you'll add functionality to it in the next article.

Try It Out
Run the application and click on the Users menu item. Notice that the toolbar has been added.

SPONSORED SIDEBAR: Get .NET Core Help for Free
Looking to create a new or convert an existing application to .NET Core or ASP.NET Core? Get started with a FREE hour-long CODE Consulting session to make sure you get started on the right foot. Our consultants have been working with and contributing to the .NET Core and ASP.NET Core teams since the early pre-release builds. Leverage our team's experience and proven track record to make sure that your next project is a success. For more information, visit www.codemag.com/consulting or email us at info@codemag.com.

Summary
In this article, you built the User Feedback Screen to allow a user to submit feedback about your application. While building this screen, you learned to work with the validation errors returned from the Entity Framework. You also learned to use control aggregation to build a screen from different user controls. Building an application this way allows you to test screen functionality separately. Using inheritance of the view models brings all the functionality for each of the user controls together so one user control can control all of the others. In the next article, you'll learn to enable and disable each of the buttons based on what "state" you are in. You'll also learn to build the add, edit, and delete functionality of the user screen.

Paul D. Sheriff
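An aside for readers joining the series at this installment: the RaisePropertyChanged() calls used throughout depend on a property-change base class built in the earlier articles. Below is a rough, self-contained sketch of how such a helper typically works; the class and property names here are illustrative and are not necessarily the exact code from the series.

```csharp
using System;
using System.ComponentModel;

// Minimal sketch of a property-change base class. WPF bindings
// subscribe to INotifyPropertyChanged.PropertyChanged and re-read
// any binding whose path involves the reported property name.
public class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void RaisePropertyChanged(string propertyName)
    {
        PropertyChanged?.Invoke(this,
            new PropertyChangedEventArgs(propertyName));
    }
}

// Stand-in for the detail view model pattern shown in the article.
public class DemoDetailViewModel : ViewModelBase
{
    private string _entity = "";

    public string Entity
    {
        get { return _entity; }
        set
        {
            _entity = value;
            RaisePropertyChanged("Entity"); // bound UI refreshes here
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        var vm = new DemoDetailViewModel();
        vm.PropertyChanged += (s, e) =>
            Console.WriteLine("Changed: " + e.PropertyName);
        vm.Entity = "First User"; // prints: Changed: Entity
    }
}
```

Setting Entity raises the event, which is exactly what lets the ListView's SelectedItem binding and the detail control's text box bindings stay in sync.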



ONLINE QUICK ID 1909041

Responsible Package Management in Visual Studio
Almost nine years ago, a new open source project named NuGet (www.NuGet.org) made its debut, and two years after that debut, NuGet began shipping with Microsoft Visual Studio, as it still does today. NuGet is one of several package managers, like Node Package Manager (NPM) for JavaScript and Maven for Java. Package managers simplify and automate library consumption. For example,

if you need a library to implement JavaScript Object Notation (JSON) capabilities in your .NET application, it takes a few clicks of the mouse and, just like that, your application has powerful capabilities that you didn't have to write, free of charge.

Once upon a time, developers built and maintained their own libraries. If you needed a library, chances were you asked fellow developers in online communities hosted on CompuServe and, in the giving spirit that was incident to such communities, chances were good that you could get a code library to meet your needs or, at the very least, get guidance on how to build it.

John V. Petersen
johnvpetersen@gmail.com
linkedin.com/in/johnvpetersen
Based near Philadelphia, Pennsylvania, John is an attorney, information technology developer, consultant, and author.

Today, Open Source Software (OSS) has created an unprecedented availability of code, and package management systems make absorbing that code into your applications a nearly friction-free process. That progress has ushered in not only numerous benefits, but new risks and problems as well. One recent example is the November 2018 Event Stream incident involving NPM (https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident). This article addresses how to responsibly leverage NuGet in Visual Studio in a way that mitigates risk.

If you work for a public company governed by SOX, or are subject to Health Insurance Portability and Accountability Act (HIPAA) or Payment Card Industry (PCI) regulations, and your applications directly rely on a public NuGet source, there's more than a fair chance that your company may be in violation of the aforementioned standards, despite the lack of any adverse event.

In Case You're Not Familiar with NuGet
If you're not familiar with NuGet, what it is, and generally how it works, you may want to consult the documentation for additional context: https://docs.microsoft.com/en-us/NuGet/what-is-NuGet. If you want the comprehensive documentation PDF, you can download it here: http://bit.ly/NuGetPDF. If you're a Pluralsight subscriber, you may want to watch my Introduction to NuGet course: https://www.pluralsight.com/courses/NuGet.

The concepts presented herein do not require an extensive NuGet understanding. The intended audience includes experienced developers as well as directors and managers tasked with implementing a company's security and risk mitigation policies.

Package Managers and Package Sources
Before delving into the basic package manager concepts in .NET/Visual Studio with NuGet, let's get some context on package managers and packages in general. The following are the core definitions you need to understand:

• Package: An archive file (i.e., a zip or tar file) that contains code artifacts and additional metadata used by a package manager which, in turn, is used by a development environment to add a package's contents to a project.
• Package Manager: A tool that an application development environment (i.e., Visual Studio, Eclipse, etc.) uses to gain access to packages contained in a package source. Common package managers are NuGet, Maven, and Node Package Manager (NPM). Not only does a package manager manage access to a specific package, it also manages access to the other packages that the downloaded package depends upon (dependency management).
• Package Source: A collection of packages that, for each package, contains metadata about that package. Such metadata includes the current version number, release history, links to the source code repository (i.e., GitHub), documentation, and licensing information. Common package sources include NuGet.org, MyGet, and npmjs.com.

The relationship among these three (packages, package managers, and package sources) is simple: Application development environments use package managers to connect to package sources

24 Responsible Package Management in Visual Studio codemag.com


and obtain packages to be used in an application development project.

What's the Risk?
Of the three elements in the bulleted list above, risk arises from two: packages and package sources. Package sources like npmjs.com and NuGet.org are open environments to the extent that anybody can get an account and upload a package for others to download. For that reason alone, such open package sources are inherently untrustworthy. Does that mean you should avoid such open sources? Of course not. What it does mean is that when taking packages from such sources, you should perform the necessary due diligence to verify that package's contents. If you can't determine a package's provenance and its contents with certainty, you're exposing your firm to risk that could be otherwise mitigated. A real-world example of risk exposure and the consequences thereof was the Event Stream incident discovered in November 2018. That incident involved malicious code in a package that harvested account information from accounts having BitCoin balances of a certain level. The Register reported (https://www.theregister.co.uk/2018/11/26/npm_repo_bitcoin_stealer/) that the code was part of a popular NPM library that, on average, was downloaded two million times per week.

Committee of Sponsoring Organizations of the Treadway Commission (COSO)
http://www.coso.org
If you work for a publicly traded corporation, your company, at least on paper, employs the COSO Enterprise Risk Management Framework. Pick any 10-K (annual report) and there will be a section titled MANAGEMENT'S REPORT ON INTERNAL CONTROL OVER FINANCIAL REPORTING. In that section, there is likely to be a mention of COSO. If your company, on one hand, integrates NuGet Packages from public feeds into its financial applications without any vetting and, on the other hand, doesn't disclose ineffective internal controls in its 10-K, it may be reasonable to conclude that a required disclosure is missing and your company's 10-K may be in violation of SEC regulations.

On one hand, open package sources make code easily available. On the other hand, these open package sources DO NOT and, feasibly, CAN'T police submissions for malicious content. Who should be policing packages? The answer is simple: YOU! If you bring a package into your organization, it's your responsibility to verify not only the package's contents, but the contents of every other package that the downloaded package depends upon.

Managing dependencies is another nice feature that a package manager provides. If you're thinking that bringing a malicious package into your organization is like unleashing a virulent virus, you're getting the point.

The fact is, no production application or build process should ever take a direct dependency on any public package source. Setting aside malicious actors, there are many innocuous reasons to not trust public package sources:

• You're leaving everything up to the package owner to manage versions and dependencies. What if the package owner introduces a dependency that makes the package work, but is completely incompatible with your application?
• What if the package owner uploads a new package version that works, but nevertheless introduces a bug into your application? If you set your build process up to automatically upgrade your packages, you've now introduced what might be a costly bug that you'll need to spend real money fixing.

Companies should build and manage their own packages and the dependencies thereon, and create and use their own package source feeds. If you leverage a package from a public source, you should open the package and evaluate its contents, and then add that package to your own source feed or add its contents to your own package.

Doesn't Package Signing Mitigate the Risk?
In a word, yes, but it's a qualified yes. Signing mitigates some risk, but not all risk. Signing wouldn't have prevented the Event Stream incident. The only thing package signing does is validate the package author/contributor. Indeed, in most environments, you can limit the packages you take to certain authors. If you have the public key, then only those packages signed with the author's certificate can be taken. However, that doesn't mean you can just take any package from that author. What if the author's certificate was compromised? What if the author made an innocent mistake that ends up with your company sustaining some injury?

Now that you have a background on packages, package managers, and package sources, and the associated risks, let's apply that knowledge to NuGet.

NuGet at a Glance: Creating Your Own NuGet Source
As previously stated, this article is not a comprehensive how-to on NuGet. For that, consult the materials introduced at the beginning of this article. Just like packages, package managers, and package sources in general, NuGet follows the same approach. In Visual Studio, there is the NuGet Package Manager, illustrated in Figure 1.

Also illustrated in Figure 1 is the package source. Most likely, your active package source is NuGet.org. In my case, it's something labeled Local Package Source. Figure 2 illustrates what that is.

As you can see, the Local NuGet Source is just a directory on my development computer. This may be news: Setting up a NuGet Source is as simple as creating a directory! Figure 3 illustrates the NuGet Packages in the directory.

The Anatomy of a NuGet Package
A NuGet Package is just a zip archive with a different extension (.nupkg). Figure 4 illustrates how to open the contents. Figure 5 illustrates the package contents. Let's examine what is arguably the most popular and widely used NuGet Package: Newtonsoft.Json.
spend real money fixing. Packages: NewtonSoft.Json.



Figure 1: One way of accessing the NuGet Package Manager is via the project or solution context menu.

Figure 2: Within the NuGet Package Manager, Package Source’s priority can be managed.

Figure 3: A NuGet Source can be as simple as a file directory.
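For reference, a package source like the one shown in the figures can also be registered in a NuGet.Config file rather than through the Visual Studio dialog. A sketch follows; the source name and path are invented for illustration:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- A plain folder (or file share) acting as a vetted company feed -->
    <add key="Local Package Source" value="C:\Dev\LocalPackages" />
    <!-- The public feed; per this article, production builds
         should not depend on it directly -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```

Placing a file like this next to a solution scopes the source list to that codebase, which is one way to keep build servers pointed at the company feed.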



Referring to Figure 5, the items of interest are the lib folder and the signature, license, and nuspec files:

• lib folder: This folder contains one or more subfolders that use a naming convention for each supported .NET version. You can learn more about targeting multiple .NET versions here: https://docs.microsoft.com/en-us/NuGet/create-packages/supporting-multiple-target-frameworks.
• .signature.p7s file: As the name implies, this is the signature file signed by the author's certificate. You can find more information on how to sign NuGet Packages here: https://docs.microsoft.com/en-us/NuGet/create-packages/sign-a-package. You can learn how to require that only signed packages be accessible, and how to limit packages to certain authors, here: https://docs.microsoft.com/en-us/NuGet/consume-packages/installing-signed-packages.
• License.md: This is a markdown file that contains the license terms and conditions for your package. Typically, this consists of an open source license such as the MIT, GNU, or Apache 2.0 licenses.
• Nuspec: The nuspec file is the manifest. This is an XML file that is used to create the NuGet Package. This file will be discussed in the next section.

Figure 4: If you have an archive utility like 7-zip, you can simply right-click on a NuGet Package and open the archive.

Figure 5: A NuGet Package contains the meta data, license information, and libraries for each .NET version supported.

Creating Your Own NuGet Package

You now understand what packages, package managers, and package sources are, and have a basic understanding of how NuGet fits into that space. You also understand how to create and reference your own package source with nothing more than a directory or file share. All that's left to get started is to learn how to create your own NuGet Package. To illustrate, I'm going to use the Immutable Class Library I created and wrote about a few issues back (https://www.codemag.com/Article/1905041/Immutability-in-C#).

Figure 6: The NuGet Package structure contains a lib folder that contains a subfolder for each supported .NET version. The only other required file is the nuspec file (manifest).

There are several approaches you can use to create NuGet Packages. I'm going to show you the method I consider the



Figure 7: The nuspec file is the manifest that drives the package creation process. Most importantly, the nuspec file
references the package’s dependency.
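Figure 7 shows the article's actual nuspec. As a generic sketch of the shape a minimal nuspec takes, the following may be useful; every value below is invented for illustration:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <!-- The id must be unique within the hosting package source -->
    <id>MyCompany.ImmutableSamples</id>
    <version>1.0.0</version>
    <authors>Jane Developer</authors>
    <description>Sample immutable class library.</description>
    <!-- Dependencies declared here are what the package manager
         resolves when this package is installed -->
    <dependencies>
      <group targetFramework=".NETStandard2.0" />
    </dependencies>
  </metadata>
</package>
```

The id, version, authors, and description elements are the required core; everything else (license, icon, dependencies) layers on top.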

Figure 8: The information contained in the nuspec file as displayed in the NuGet Package Manager.

Figure 9: NuGet.exe provides command line access to NuGet’s functions including package creation and download/
installation of NuGet Packages in your projects via an automated build server like Jenkins or Team City.



Figure 10: The NuGet.exe pack command, using a nuspec file generates the NuGet Package.
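As a rough sketch, the pack step shown in Figure 10 boils down to a single invocation of this shape (the file name is invented):

```text
nuget pack MyCompany.ImmutableSamples.nuspec
```

This reads the nuspec manifest and emits an id.version.nupkg file in the current directory, which can then be copied into a folder-based feed.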

easiest to use and understand. There are also many options you can apply that I won't cover here. For comprehensive coverage of all you can do with package creation, consult the documentation at NuGet.org.

Step 1: Create a Package Directory Structure and Add Your Binaries
Figure 6 illustrates the directory structure.

I added an icon.png file that will be displayed in the Package Manager, as shown in Figure 1. The license text file contains the MIT License language. Finally, there's the nuspec file, which is illustrated in Figure 7.

Step 2: Create Your Nuspec File
The nuspec file illustrated in Figure 7 is very basic.

For a complete nuspec reference, you can find that information here: https://docs.microsoft.com/en-us/NuGet/reference/nuspec. The ID you choose for your package must be unique in the context of the source within which it's hosted. Accordingly, if you elect to make your NuGet Package available in the NuGet.org feed, the ID must be unique in that universe. Figure 8 illustrates how the package appears in the NuGet Package Manager.

Step 3: Create Your NuGet Package
In order to create your NuGet Package from the command line, you need the NuGet Command Line Tools. Figure 9 illustrates where you can download NuGet.exe.

Figure 10 illustrates how to generate your NuGet Package.

Step 4: Publish Your Package
Depending on the type of package source you are using, your steps may be slightly different. For a file directory source, the process is as simple as copying the file to the directory. If you're hosting your own NuGet Server (https://docs.microsoft.com/en-us/NuGet/hosting-packages/NuGet-server), you will use one of the methods described here: https://docs.microsoft.com/en-us/NuGet/NuGet-org/publish-a-package.

Other Hosting Options

Instead of self-hosting or using the NuGet.org public feed, you may elect to use a third-party service. For NuGet, there are paid services such as MyGet (myget.org) and Chocolatey (chocolatey.org). If it's so easy to host your own feed, why would you consider a paid service? These paid services have their own DR (Disaster Recovery) infrastructure. If you host your own feed, you need to consider how your server will be backed up and replicated, and how you will recover in the event of a catastrophic event.

Conclusion
Open source has made it easier than ever to add features to your applications. Part of that ease is speed. Speed and ease mean less friction. Once upon a time, before open source as we know it today, before the Internet, and before package management, there was implicit friction in the system, which provided time to assess and evaluate. Developers of another generation, in my opinion, had a better understanding of change management. They understood the discipline and rigor required to mitigate risk. For all the benefits of today's technology and the speed and ease we get with it, it's more important than ever to employ risk mitigation techniques such as what is discussed in this article, because if it's easier for us to do good things, it's easier for bad actors to use the same technology. Robust security and risk mitigation aren't free. If there's one negative side-effect of free open source, it's the expectation that things that heretofore carried a cost no longer have a cost. Consider that the next time a package is introduced into your environment. If your organization is governed by SOX, HIPAA, FINRA, PCI, etc., being compliant means not letting that situation occur.

John V. Petersen



ONLINE QUICK ID 1909051

Moving from jQuery to Vue


Most of the attention that JavaScript gets is all about creating large, monolithic Single Page Applications (SPAs). But the reality is
that a great percentage of websites still use much simpler jQuery and vanilla JavaScript. Without going all-in on moving everything
to a SPA, can you gain some of the benefits of using a framework to simplify your code and make it more reliable and testable?

Sure, you can. In many cases, moving to a SPA framework means a complete re-thinking of your application. It's a change in how you approach building applications. I wholeheartedly recommend that you think about it this way if you're building new applications, as it can really change the way you approach Web development, but…

Shawn Wildermuth
shawn@wildermuth.com
wildermuth.com
wilderminds.com
helloworldfilm.com
twitter.com/shawnwildermuth
Shawn Wildermuth has been tinkering with computers and software since he got a Vic-20 back in the early '80s. As a Microsoft MVP since 2003, he's also involved with Microsoft as an ASP.NET Insider and ClientDev Insider. He's the author of over twenty Pluralsight courses, has written eight books, is an international conference speaker, and is one of the Wilder Minds. You can reach him at his blog at http://wildermuth.com. He's also making his first, feature-length documentary about software developers today, called "Hello World: The Film." You can see more about it at http://helloworldfilm.com.

In many cases, it's beneficial to ramp up to these technologies. Tearing down your jQuery empire and adding something like Angular, Vue, or React is a big leap. That's one of the reasons I love how Vue works.

In this article, my goal is to give you, the jQuery user, a taste of the different approach that Vue takes and how this can improve your code, your markup, and your ability to build apps quickly. Yeah, quickly.

What's Wrong with jQuery?
Nothing's wrong with jQuery. Really, nothing. It's been pivotal to creating most of the great websites you know. jQuery was responsible for making cross-browser/cross-OS websites work. It's great.

But in many ways, it's getting long in the tooth. Let's take a quick example from Jake Rocheleau's blog post here: https://speckyboy.com/building-simple-reddit-api-webapp-using-jquery/. He created a small example that uses jQuery to show how to call the GitHub API. You can see it in Figure 1.

The code is simple but shows a lot of the benefits and drawbacks of jQuery. Let's break it down. You can see the JavaScript in Listing 1. Most of the code is inside one large event handler.

$('#ghsubmitbtn').on('click', function(e){
  e.preventDefault();
  ...

Although this is pretty easy to remedy, it's a common practice because it's easy to think of an event handler as the main place for code in jQuery.

Next up is changing the UI in jQuery:

$('#ghapidata').html(`
  <div id="loader">
    <img src="https://i.imgur.com/UqLN6nl.gif"
         alt="loading...">
  </div>`);

It starts up immediately with code that builds up markup in code. This mixes the metaphors of UI and logic into a single file. Getting the markup correct with in-line text is notoriously fragile because there is no real syntax checking.

Next is the code using jQuery to read a value from an input:

var username = $('#ghusername').val();

Although this is straightforward, it requires you to use an ID on every input that you need to get data out of. It also means a query over the entire DOM, which can be slow, even at the fast speeds of jQuery. But I suspect if you're reading this article, you already know about the issues with jQuery. How can you make it better with Vue?

Using Vue in Place of jQuery
I think the best way to describe the way that Vue works is to convert this simple jQuery app. You can see in Listing 1 the complete JavaScript of the project, and Listing 2 contains the complete markup. Let's rebuild this piece by piece so you can see how Vue can be used to do this easier and with less code.

Creating the Vue Object
One of the things I really like about Vue is that instead of requiring you to buy into a big ecosystem, you can just drop
... requiring you buy into a big ecosystem, you can just drop

Figure 1: Simple jQuery Page
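The jQuery style above treats the DOM as the source of truth and queries it on demand. Vue inverts that: state lives in a plain JavaScript object, and the framework reacts when that object changes. Here is a rough, framework-free simulation of that idea; the observe() helper is invented for illustration and is not Vue's actual API:

```javascript
// Simulate reactive state with a Proxy: writes to the model
// notify a callback, which is where Vue would re-render the DOM.
function observe(data, onChange) {
  return new Proxy(data, {
    set(target, key, value) {
      target[key] = value;
      onChange(key, value); // Vue's renderer hooks in roughly here
      return true;
    }
  });
}

const renders = [];
const state = observe({ username: "" }, (key, value) => {
  renders.push(`${key} -> ${value}`);
});

// Application code updates the model, never the DOM directly.
state.username = "shawnwildermuth";
console.log(renders[0]); // prints: username -> shawnwildermuth
```

That inversion is why the Vue version of this app needs no ID-based DOM queries: changing the model is enough.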

30 Moving from jQuery to Vue codemag.com


Listing 1: jQuery Version of the App

$(function () {
  $('#ghsubmitbtn').on('click', function (e) {
    e.preventDefault();
    $('#ghapidata').html(`<div id="loader">
      <img src="https://i.imgur.com/UqLN6nl.gif" alt="loading...">
      </div>`);

    var username = $('#ghusername').val();
    var requri = 'https://api.github.com/users/' + username;
    var repouri = 'https://api.github.com/users/'
                  + username + '/repos';

    $.getJSON(requri)
      .done((json) => {
        if (json.message == "Not Found" || username == '') {
          $('#ghapidata').html("<h2>No User Info Found</h2>");
        }
        else {
          // else we have a user and we display their info
          var fullname = json.name;
          var username = json.login;
          var aviurl = json.avatar_url;
          var profileurl = json.html_url;
          var followersnum = json.followers;
          var followingnum = json.following;
          var reposnum = json.public_repos;

          if (fullname == undefined) { fullname = username; }

          var outhtml = `<h2>${fullname}
            <span class="smallname">
              (@<a href="${profileurl}">${username}</a>)
            </span>
            </h2>
            <div class="ghcontent">
              <div class="float-left img-thumbnail m-1">
                <a href="${profileurl}" target="_blank">
                  <img src="${aviurl}"
                       width="80"
                       height="80"
                       alt="${username}">
                </a>
              </div>
              <p>
                Followers: ${followersnum} -
                Following: ${followingnum}
                <br>Repos: ${reposnum}</p></div>
            <div class="repolist clearfix">`;

          var repositories;
          $.getJSON(repouri)
            .done((json) => {
              repositories = json;
              outputPageContent();
            });

          function outputPageContent() {
            if (repositories.length == 0) {
              outhtml += '<p>No repos!</p></div>';
            } else {
              outhtml += `<h4>Repos List:</h4> <div>`;
              $.each(repositories, function (index) {
                outhtml += `<div class='d-inline'>
                  <a href="${repositories[index].html_url}"
                     class="btn btn-sm btn-info m-1"
                     target="_blank">
                    ${repositories[index].name}
                  </a>
                </div>`;
              });
              outhtml += '</div></div>';
            }
            $('#ghapidata').html(outhtml);
          } // end outputPageContent()
        } // end else statement
      }); // end requestJSON Ajax call
  }); // end click event handler
});

Figure 2: Markup of Original jQuery App

<body>
  <div id="w" class="container">
    <h1>Simple Github API Webapp</h1>
    <p>Enter a single Github username below and
      click the button to display profile info
      via JSON.</p>

    <div class="row">
      <div class="col-6">
        <input type="text" class="form-control"
          id="ghusername"
          placeholder="Github username..."
          autofocus="autofocus">
      </div>

      <div class="col-6">
        <button class="btn btn-success"
          id="ghsubmitbtn">Pull User
          Data</button>
      </div>

      <div id="ghapidata" class="clearfix">
      </div>
    </div>
  </div>
</body>

Important Links

Vue: vuejs.org
Example: shawnw.me/Vue4QueryCode
Original Post: shawnw.me/2XSlNzM

a JavaScript file and start working with it. To get started, you'll just use a link to a development version of the library:

<!-- development version,
     includes helpful console warnings -->
<script
  src="//cdn.jsdelivr.net/npm/vue/dist/vue.js">
</script>

In fact, in this example, I left the jQuery in the project as I'll get back to using it a little later. Yeah, really.

The basis of how Vue works is a Vue object. I just created a new JavaScript file and started out with just an instance of the Vue object:

var app = new Vue({
  el: "#w"
});

This object takes a JavaScript object to specify options. The first part that's important is the el property. That represents a selector for the parent element in the HTML that you're telling Vue to take over for. This is a key difference between jQuery and Vue. The magic of jQuery is to be able to search through the DOM and find the elements you're interested in. Vue takes responsibility for a section of the DOM (or the entire DOM). So, in this case, you want to set up the code to take over the element called w:

<body>
<div id="w" class="container">


<h1>Simple Github API Webapp</h1>
<p>
Enter a single Github username below and
click the button to display profile info
via JSON.
</p>
...

Moving from jQuery to Vue

Using jQuery changed Web development in fundamental ways. If you're still using it as your main development tool, I think it's time to look beyond it. Vue is a great tool that allows you to keep the simplicity of jQuery as "just a library on a page" without the difficulties of trying to maintain large jQuery code-bases.

In this article, I show you how they're comparable, although I only touch on the very tip of the sword of what Vue can do. If you're a jQuery developer, this is a great way to move away from query-based development to reactive user interfaces.

If you're completely new to Vue, I suggest that you read my prior article "A Vue to a Skill" (from the March/April 2019 issue or https://shawnw.me/AVueToASkill) for a brief introduction to Vue.

The first thing you'll want to do is to be able to read the input for the user name. In jQuery, you accomplish this by:

var username = $('#ghusername').val();

But in Vue, the approach is different. You want to expose a piece of data from the Vue object that can be changed as the user interacts with it. For example, let's add the data section to the Vue object:

var app = new Vue({
  el: "#w",
  data: {
    userName: ""
  },
});

The data property is a list of the data that you want to share with the markup. In this case, you'll start with just a single property to hold the name of the user. In the markup, you'll use an attribute called v-model to have the input field tied to the userName:

<input type="text" class="form-control"
  placeholder="Github username..."
  autofocus="autofocus"
  v-model="userName">

The v-model is used to create a two-way binding to the userName. This means that if code changes the value, it will show up in the UI; and if the user changes the value, the property value is changed too. Let's see how you'd work with this new value.

Handling the Click Event

The trick to handling the click event is to use another attribute: v-on. This attribute allows you to register for events on any DOM object. In this case, you want to handle the click event, so you'll use v-on:click as the full name (click is the DOM event name).

<button class="btn btn-success"
  v-on:click="onSubmit()">
  Pull User Data
</button>

Inside the value for v-on, you can simply define what code to execute. In this example, you can just call a method on the Vue object. So now you can add the onSubmit method on the Vue object like so:

var app = new Vue({
  el: "#w",
  data: {
    userName: ""
  },
  methods: {
    onSubmit: function () {
    }
  }
});

The magic starts to happen when you can just start writing code in the onSubmit without having to interrogate the DOM. You can do that by just calling the this property inside the function. Vue takes the methods and data elements and attaches them to the this property. For example, you can just access the userName inside the onSubmit method:

onSubmit: function () {
  if (this.userName) {
    console.log(`User: ${this.userName}`);
  }
}

The key differentiator for Vue and jQuery is that instead of trying to modify the DOM, you'll just keep your business logic in JavaScript and let the bindings in the HTML show the changes that happen to the data. Although you can replace jQuery with Vue, you'll need to change your mindset about how to solve Web development problems. Let's implement the calls to GitHub to see how you can close the loop using this new mindset.

Executing the API

This simple page takes the name of the GitHub user and executes a network request. The jQuery version uses jQuery to execute the network requests. Vue purposely doesn't attempt to do the networking for you. You're free to use any networking library you want. It could be jQuery (because you likely already know it), but there are also alternatives like axios and even a Vue-specific one called vue-resource. But because you're converting code, let's just keep the jQuery networking code.

The existing code uses jQuery's getJSON to get the responses from the API. Instead of constructing HTML like the old version does (see Figure 1), you're going to just create a new data property called user that will hold the object from the server:

data: {
  userName: "",
  user: null
},

Now that you have the data, you can simplify it by just setting the value to the user property:

$.getJSON(requri)
  .done((json) => {


    ...
    this.user = json; // just bind
                      // to the data
  })
...

This lets you take all of the jQuery code that constructs the HTML and just move it to markup:

<!-- User Info -->
<div>
  <h2>{{ user.name }}</h2>

The double curly braces syntax here (often called mustache syntax) allows you to call a one-way binding from the data element. In this case, you'll just take the user's name and show it in the h2 element. If you run the code looking like this, it will complain because when you first run the page (before you execute the API), it won't be able to find the name property of the user. To get around this, you can use another attribute that Vue supplies called v-if:

<!-- User Info -->
<div v-if="user">
  <h2>{{ user.name }}</h2>

The v-if attribute here tells Vue to not show anything in this div if the user is false (which in JavaScript means null, empty, or another case for false). This defers showing the entire div until you set the user. The v-if attribute gives you the power to control which parts of the UI to show until they're needed. You could show the user section, but it would be empty until you have a user.

You want to show more than just the user name, and you'd like to show a link over to the GitHub page too (as seen in Figure 1). To do this, you can add that markup after the user.name:

<!-- User Info -->
<div v-if="user">
  <h2>{{ user.name }}
    <span class="smallname">
      (@<a v-bind:href="user.html_url"
        target="_blank">{{ user.login }}</a>)
    </span>
  </h2>

This introduces another attribute called v-bind. This attribute is used to bind to attributes. In this case, you're using it to bind to the href of the anchor tag. Note that because you're using the v-bind attribute, you don't need the curly braces. Vue interprets what's inside the quotes as an expression against the Vue object.

In this way, you're using Vue to specify what to show inside of the HTML, but it's still valid HTML. This way, your HTML tools will just work.

Collection Binding

In jQuery, you're used to creating collections of markup in order to create lists and grids. For example, from the jQuery version:

$.each(repositories, function(index) {
  outhtml += `<div class='d-inline'>
    <a href="${repositories[index].html_url}"
      class="btn btn-sm btn-info m-1"
      target="_blank">
      ${repositories[index].name}
    </a>
  </div>`;
});

By going through a collection of the repositories, the jQuery code constructs a set of divs that represent each of the repositories to show the user. The strategy with Vue is the opposite of this. In Vue, you simply add the repos to the existing user object (because you need two calls to the GitHub API to get both the repos and the user object) like so:

$.getJSON(repouri)
  .done(repos => {
    Vue.set(this.user, "repos", repos)
  })

Before you make the collection work, let's talk about what's happening here. The call to Vue.set is something new. It's actually uncommonly used, but you need it here. Before it makes sense, I need to talk about how Vue updates the UI when you change the code.

In Vue's data object, you have properties that you want to expose to the UI:

data: {
  userName: "",
  user: null
},

Vue takes this list of properties and replaces them at runtime with a set of getters and setters (e.g., properties). It does this so that when you set a value, it can react to that change (you know, that reactivity term that you've probably heard bandied about). It does this under the hood, so that when you change a value, it queues up that change; the next time it updates the markup, it knows that your change needs to be shown.

Great, but what is this Vue.set, then? Sometimes you write code that breaks the idea of reactivity, and I'm doing that in this example. By adding a new property to user (e.g., repos), Vue didn't know about it initially. So, you can let Vue know about it by using the Vue.set method. This takes the reactive object to set a property on, the name of the property, and the value to assign it. This way, you're hinting to Vue that it needs to update the UI when you change the value. It isn't necessary in most cases, but it's good to know for when you do need it.

With all that in place, you can then use a new Vue attribute called v-for to show the collection in the markup:

<a v-for="repo in user.repos"
  v-bind:key="repo.html_url"
  v-bind:href="repo.html_url"
  class="btn btn-sm btn-info m-1">
  {{ repo.name }}
</a>

The v-for attribute instructs the element to create an anchor tag for every repository in the user.repos property and


Listing 3: The complete Vue implementation

var app = new Vue({
  el: "#w",
  data: {
    userName: "",
    user: null
  },
  methods: {
    onSubmit: function () {
      if (this.userName) { // user has put a user name in
        // URIs
        var requri =
          `https://api.github.com/users/${this.userName}`;
        var repouri =
          `https://api.github.com/users/${this.userName}/repos`;

        $.getJSON(requri)
          .done((json) => {
            // Ensure we have a name
            if (!json.name) json.name = json.login;
            this.user = json; // just bind to the data

            $.getJSON(repouri)
              .done(repos => Vue.set(this.user, "repos", repos));
          });
      }
    }
  }
});
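As an aside, the getter/setter rewriting and the Vue.set behavior described earlier can be sketched without Vue at all. This is only an illustrative approximation (the helper names makeReactive and reactiveSet are invented for this sketch), not Vue's actual implementation:

```javascript
// Sketch of reactivity: rewrite each existing property of `data`
// into a getter/setter pair that reports changes to a callback.
// Vue 2 does something similar with Object.defineProperty, which
// is why properties added later need an explicit Vue.set.
function makeReactive(data, onChange) {
  Object.keys(data).forEach(function (key) {
    var value = data[key]; // captured per property
    Object.defineProperty(data, key, {
      get: function () { return value; },
      set: function (newValue) {
        value = newValue;
        onChange(key, newValue); // in Vue, this queues a UI update
      }
    });
  });
  return data;
}

// Counterpart of Vue.set: define a NEW property reactively and
// report the initial assignment.
function reactiveSet(data, key, value, onChange) {
  var current = value;
  Object.defineProperty(data, key, {
    get: function () { return current; },
    set: function (newValue) {
      current = newValue;
      onChange(key, newValue);
    }
  });
  onChange(key, value);
}

// Usage sketch:
var changes = [];
var log = function (key, value) { changes.push(key); };
var state = makeReactive({ userName: "" }, log);

state.userName = "someuser"; // tracked: the setter fires
state.repos = [];            // NOT tracked: plain property, no setter
reactiveSet(state, "stars", 5, log); // tracked via the helper
```

After this runs, changes contains only the keys that reactivity could see (userName and stars); the plain repos assignment slipped by unnoticed, which is exactly the situation Vue.set exists to avoid.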

Listing 4: The Markup with the Vue implementation

<div id="w" class="container">
  <h1>Simple Github API Webapp</h1>
  <p>Enter a single Github username below and
    click the button to display profile info
    via JSON.
  </p>

  <div class="row">
    <div class="col-6">
      <input type="text" class="form-control"
        id="ghusername"
        placeholder="Github username..."
        autofocus="autofocus"
        v-model="userName">
    </div>

    <div class="col-6">
      <button class="btn btn-success"
        v-on:click="onSubmit()">
        Pull User Data
      </button>
    </div>
    <div class="clearfix">
      <!-- User Info -->
      <div v-if="user">
        <h2>{{ user.name }}<span
          class="smallname">(@<a
          v-bind:href="user.html_url"
          target="_blank">{{ user.login }}</a>)</span>
        </h2>
        <div class="ghcontent">
          <div>
            <a v-bind:href="user.login"
              target="_blank">
              <img
                v-bind:src="user.avatar_url"
                width="80" height="80"
                class="img-thumbnail float-left m-1"
                v-bind:alt="user.login">
            </a>
          </div>
          <p>Followers: {{ user.followers }} -
            Following:
            {{ user.following }}<br>Repos:
            {{ user.public_repos }}</p>
        </div>
        <div class="clearfix">
          <p><strong>Repos List:</strong></p>
          <p
            v-if="!user.repos || user.repos.length == 0">
            No Repos</p>
          <div>
            <a v-for="repo in user.repos"
              v-bind:key="repo.html_url"
              v-bind:href="repo.html_url"
              class="btn btn-sm btn-info m-1"
              target="_blank">{{ repo.name }}</a>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>

to create a local variable called repo to represent the individual repository.

When using v-for, you also need to specify a unique key for each record (which helps Vue remove items from the UI when you change the collection in the code). You do that by binding to a key attribute with v-bind. In this case, you know that the URL is unique, so just use it as a key. You could use a primary key or other unique key. It doesn't matter as long as it's unique.

The rest of the code is just simple binding like you've seen before (setting the href and the contents of the anchor). You can see the complete finished code in Listings 3 and 4.

Where Are We?

As a jQuery user, you may be used to being able to create simple applications by just dropping in jQuery and writing some quick code. I would argue that Vue is a better choice these days. It allows you to quickly get your job done and keep the code from becoming a spaghetti mess of callbacks and inline HTML. But it does require a learning curve that can be a little steep at first. The switch of mindset from querying and manipulating the DOM to having the DOM react to changes in your code is a key change. But once you get over that hump, I think you'll see the benefit of using a framework like Vue.

Shawn Wildermuth


ONLINE QUICK ID 1909061

Introduction to GraphQL for .NET Developers: Schema, Resolver, and Query Language
GraphQL has been gaining wide adoption as a way of building and consuming Web APIs. GraphQL is a specification that defines
a type system, query language, and schema language for your Web API, and an execution algorithm for how a GraphQL service
(or engine) should validate and execute queries against the GraphQL schema. It’s upon this specification that the tools and

libraries for building GraphQL applications are built. In this article, I'll introduce you to some GraphQL concepts with a focus on the GraphQL schema, resolvers, and the query language. If you'd like to follow along, you need some basic understanding of C# and ASP.NET Core.

Peter Mbanugo
p.mbanugo@yahoo.com
www.pmbanugo.me
twitter.com/p_mbanugo

Peter is a software developer who codes in JavaScript and C#. He has experience working on the Microsoft stack of technologies and also building full-stack applications in JavaScript. He is a co-chair on NodeJS Nigeria, a Twilio Champion, and a contributor to the Hoodie open source project. He is the maker of Hamoni Sync, a real-time state synchronization as a service platform. He currently works with Field Intelligence, where he helps build a Supply Chain Management system that powers vaccine and immunization distribution with the Nigerian government as a major user. When he is not coding, he enjoys writing the technical articles that you can find on his website or other publications, such as on Pluralsight and Telerik. His current focus is on Offline-First and GraphQL.

Why Use GraphQL?

GraphQL was developed to make reusing the same API a flexible and efficient process. GraphQL works for API clients with varying requirements without the server needing to change its implementation as new clients get added, and without the client needing to change how it uses the API when new things get added. It solves many of the inefficiencies that you may experience when working with a REST API.

Some of the reasons you should use GraphQL when building APIs are:

• GraphQL APIs have a strongly typed schema
• No more over- or under-fetching
• Analytics on API use and affected data

Let's take a look.

A Strongly Typed Schema as a Contract Between the Server and Client

The GraphQL schema, which can be written using the GraphQL Schema Definition Language (SDL), clearly defines what operations can be performed by the API and the types available. It's this schema that the server's validation engine uses to validate requests from clients and determine whether they can be executed.

No More Over- or Under-Fetching

GraphQL has a declarative way of requesting data using the GraphQL query language syntax. This way, the client can request any shape of data they want, as long as those types and their fields are defined in the schema. This is in contrast to REST APIs, where the endpoints return predefined and fixed data structures.

This declarative way of requesting data solves two commonly encountered problems in RESTful APIs:

• Over-fetching
• Under-fetching

Over-fetching happens when a client calls an endpoint to request data, and the API returns the data the client needs as well as extra fields that are irrelevant to the client. An example to consider is an endpoint /users/id, which returns a user's data. In this example (an online school's database), it returns basic information, such as name and department, as well as extra information, such as address, billing information, or other pertinent information, such as courses they're enrolled in, purchasing history, etc. For some clients or specific pages, this extra information can be irrelevant. A client may only need the name and some identifying information, like a social security number or the courses they're enrolled in, making the extra data such as address and billing information irrelevant. This is where over-fetching happens, affecting performance. It can also consume more of users' Internet data plans.

Under-fetching happens when an API call doesn't return enough data, forcing the client to make additional calls to the server to retrieve the information it needs. If the API endpoint /users/id only returns data that includes the user's name and one other bit of identifying data, clients needing all of the user's information (billing details, address, courses completed, purchasing history, etc.) will have to request each piece of that data with separate API calls. This affects performance for these types of clients, especially if they're on a slow connection.

This problem isn't encountered in GraphQL applications because the client can request exactly the bits of data it needs from the server. If the client requirement changes, the server need not change its implementation; rather, the client is updated to reflect the new data requirement by adding the extra field(s) it needs when querying the server. You will learn more about this and the declarative query language in GraphQL in the upcoming sections.

Analytics on Clients' Usage

GraphQL uses resolver functions (which I'll talk about later) to determine the data that the fields and types in the schema return. Because clients can choose which fields the server should return with the response, it's possible to track how those fields are used and evolve the API to deprecate fields that are no longer requested by clients.

Setting Up the Project

You'll be building a basic GraphQL API that returns data from an in-memory collection. Although GraphQL is independent of the transport layer, you want this API to be accessed over HTTP, so you'll create an ASP.NET Core project.
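To make the over-fetching point concrete, here is what a request against the hypothetical online-school API described above might look like in the GraphQL query language; the field names are illustrative only, not part of the API you're about to build:

```graphql
# Ask only for the name and enrolled courses; address, billing
# information, and purchase history are simply never requested.
{
  user(id: "1") {
    name
    courses {
      title
    }
  }
}
```

The server's response mirrors this shape exactly, so nothing irrelevant crosses the wire.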

36 Introduction to GraphQL for .NET Developers: Schema, Resolver, and Query Language codemag.com
Figure 1: The NuGet packages to install

Create a new ASP.NET Core project and install the dependencies shown in Figure 1.

The first package you installed is the GraphQL package for .NET. It provides classes that allow you to define a GraphQL schema and also a GraphQL engine to execute GraphQL queries. The second package provides an ASP.NET Core middleware that exposes the GraphQL API over HTTP. The third package is referred to as the GraphQL Playground, which works in a similar way to Postman for REST APIs. It gives you an editor in the browser where you can write GraphQL queries against your server and see how it responds. It gives you IntelliSense, and you can view the GraphQL schema from it.

The GraphQL Schema

The GraphQL schema is at the center of every GraphQL server. It defines the server's API, allowing clients to know which operations can be performed by the server. The schema is written using the GraphQL schema language (also called the schema definition language, SDL). With it, you can define object types and fields to represent data that can be retrieved from the API, as well as root types that define the groups of operations that the API allows. The root types are the Query type, Mutation type, and Subscription type, which are the three types of operations that you can run on a GraphQL server. The Query type is compulsory for any GraphQL schema, and the other two are optional. Although you can define custom types in the schema, the GraphQL specification also defines a set of built-in scalar types. They are Int, Float, Boolean, String, and ID.

There are two ways of building GraphQL server applications. There's the schema-first approach, where the GraphQL schema is designed up front. The other approach is the code-first approach, where the GraphQL schema is constructed programmatically. The code-first approach is common when building a GraphQL server using a typed language like C#. You're going to use the code-first approach here and later look at the generated schema.

Let's get started with the schema. Create a new folder called GraphQL and add a new file Book.cs with the content in the following snippet:

public class Book
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int? Pages { get; set; }
    public int? Chapters { get; set; }
}

Add another class BookType.cs and paste the content from the next snippet into it.

using GraphQL.Types;

public class BookType : ObjectGraphType<Book>
{
    public BookType()
    {
        Field(x => x.Id);
        Field(x => x.Title);
        Field(x => x.Pages, nullable: true);
        Field(x => x.Chapters, nullable: true);
    }
}

Listing 1: The type for the root query operation

using GraphQL.Types;
using System.Collections.Generic;
using System.Linq;

public class RootQuery : ObjectGraphType
{
    public RootQuery()
    {
        Field<ListGraphType<BookType>>("books", resolve:
            context => GetBooks());
        Field<BookType>("book",
            arguments: new QueryArguments(
                new QueryArgument<IdGraphType> { Name = "id" }
            ), resolve: context =>
            {
                var id = context.GetArgument<int>("id");
                return GetBooks().FirstOrDefault(x => x.Id == id);
            });
    }

    static List<Book> GetBooks()
    {
        var books = new List<Book> {
            new Book {
                Id = 1,
                Title = "Fullstack tutorial for GraphQL",
                Pages = 356
            },
            new Book {
                Id = 2,
                Title = "Introductory tutorial to GraphQL",
                Chapters = 10
            },
            new Book {
                Id = 3,
                Title = "GraphQL Schema Design for the Enterprise",
                Pages = 550,
                Chapters = 25
            }
        };

        return books;
    }
}
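The text mentions looking at the generated schema later; for orientation, the schema that this code-first setup produces should correspond roughly to the following SDL sketch (exact names and nullability depend on how the GraphQL .NET library maps the C# types):

```graphql
type Book {
  id: Int!
  title: String!
  pages: Int
  chapters: Int
}

type RootQuery {
  books: [Book]
  book(id: ID): Book
}

schema {
  query: RootQuery
}
```

Note how pages and chapters come through without the ! marker: they were declared nullable, while the other fields default to non-nullable.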

Figure 2: The GraphQL Playground opened in the browser

The code in the last snippet represents a GraphQL object type in the schema. It'll have fields that match the properties in the Book class. You set the Pages and Chapters fields to be nullable in the schema. If not set, by default, the GraphQL .NET library sets them as non-nullable.

The application you're building only allows querying for all the books and querying for a book based on its ID. The book type is defined, so go ahead and define the root query type. Add a new file RootQuery.cs in the GraphQL folder, then copy and paste the code from Listing 1 into it.

The RootQuery class will be used to generate the root operation query type in the schema. It has two fields, book and books. The books field returns a list of Book objects, and the book field returns a Book type based on the ID passed as an argument to the book query. The type for this argument is defined using the IdGraphType, which translates to the built-in ID scalar type in GraphQL. Every field in a GraphQL type can have zero or more arguments.

You'll also notice that you're passing in a function to the Resolve parameter when declaring the fields. This function is called a Resolver function, and every field in GraphQL has a corresponding Resolver function used to determine the data for that field. Remember that I mentioned that GraphQL has an execution algorithm? The implementation of this execution algorithm is what transforms the query from the client into actual results, by moving through every field in the schema and executing its Resolver function to determine its result.

The books resolver calls the GetBooks() static function to return a list of Book objects. You'll notice that it's returning a list of Book objects and not BookType, which is the type tied to the schema. The GraphQL for .NET library takes care of this conversion for you.

The book resolver calls context.GetArgument with id as the name of the argument to retrieve. This argument is then used to filter the list of books and return a matching record.

The last step needed to finish the schema is to create a class that represents the schema and defines the operations allowed by the API. Add a new file GraphSchema.cs with the content in the following snippet:

using GraphQL;
using GraphQL.Types;

public class GraphSchema : Schema
{
    public GraphSchema(IDependencyResolver resolver) :
        base(resolver)
    {
        Query = resolver.Resolve<RootQuery>();
    }
}

In that bit of code, you created the schema that has the Query property mapped to the RootQuery defined in Listing 1. It uses dependency injection to resolve this type. The IDependencyResolver is an abstraction over whichever dependency injection container you use, which, in this case, is the one provided by ASP.NET.

Configuring the GraphQL Middleware

Now that you have the GraphQL schema defined, you need to configure the GraphQL middleware so it can respond to GraphQL queries. You'll do this inside the Startup.cs file. Open that file and add the following using statements:

using GraphQL;
using GraphQL.Server;
using GraphQL.Server.Ui.Playground;

Go to the ConfigureServices method and add the code snippet you see below to it.

services.AddScoped<IDependencyResolver>
    (s => new
    FuncDependencyResolver(s.GetRequiredService));

services.AddScoped<GraphSchema>();
services.AddGraphQL()
    .AddGraphTypes(ServiceLifetime.Scoped);

The code in that snippet configures the dependency injection container so that when something requests an IDependencyResolver, it returns a FuncDependencyResolver. In the lambda, you call GetRequiredService to hook it up with the internal dependency injection in ASP.NET. Then you added the GraphQL schema to the dependency injection container and used the services.AddGraphQL extension method to register all of the types that GraphQL .NET uses, and also called AddGraphTypes to scan the assembly and register all graph types, such as the RootQuery and BookType types.

Let's move on to the Configure method to add code that sets up the GraphQL server and also the GraphQL playground that is used to test the GraphQL API. Add the code snippet below to the Configure method in Startup.cs.

app.UseGraphQL<GraphSchema>();
app.UseGraphQLPlayground
    (new GraphQLPlaygroundOptions());

The GraphQL Query Language

So far, you've defined the GraphQL schema and resolvers and have also set up the GraphQL middleware for ASP.NET, which runs the GraphQL server. You now need to start the server and test the API.

Start your ASP.NET application by pressing F5 or running the command dotnet run. This opens your browser with the URL pointing to your application. Edit the URL and add /ui/playground at the end of the URL in order to open the GraphQL playground, as you see in Figure 2.

The GraphQL playground allows you to test the server operations. If you've built REST APIs, think of it as a Postman alternative for GraphQL.

Now let's ask the server to list all the books it has. You do this using the GraphQL query language, another concept of GraphQL that makes it easy for different devices to query for the data they want, served from the same GraphQL API.

Go to the GraphQL playground. Copy and run the query you see on the left side of the editor in Figure 3, then click the play button to send the query. The result should match what you see on the right-hand side of the playground, as shown in Figure 3.

Figure 3: The query results on books

You'll notice that the query is structured similarly to the schema language. The books field is one of the root fields defined in the query type. Inside the curly braces, you have the selection set on the books field. Because this field returns a list of the Book type, you specify the fields of the Book type that you want to retrieve. You omitted the pages field; therefore it isn't returned by the query.

You can test the book(id) query to retrieve a book by its ID. Look at Figure 4 and run the query you see there to retrieve a book. In that query, you set the id argument to a value of 3, and it returned exactly what you need. You'll notice that I have two queries, books and book(id: 3). This is a valid query. The GraphQL engine knows how to handle it.

Figure 4: The query results on books and book(id: 3)

What's Next?

So far, I've covered some basics of GraphQL. You looked at defining a schema using the code-first approach, writing resolver functions, and querying the GraphQL API. You created a server using the GraphQL package for .NET, the NuGet package GraphQL.Server.Transports.AspNetCore, which is the ASP.NET Core middleware for the GraphQL API, and the GraphQL.Server.Ui.Playground package that contains the GraphQL playground. You used the GraphQL playground to test your API. I explained that in GraphQL, there are three operation types. In this article, you worked with the query operation; in the next article, you'll look at mutations and accessing a database to store and retrieve data. You'll update your schema so you can query for related data, e.g., authors with their books, or books from a particular publisher. Stay tuned!

Peter Mbanugo
40 Introduction to GraphQL for .NET Developers: Schema, Resolver, and Query Language codemag.com
ONLINE QUICK ID 1909071

Design Patterns for Distributed Systems


Containers and container orchestrators have fundamentally changed the way we look at distributed systems. In the past, developers had to build these systems nearly from scratch, resulting in each architecture being unique and not repeatable. We now have infrastructure and interface elements for designing and deploying services and applications on distributed systems using reusable patterns for microservice architectures and containerized components.

Stefano Tempesta
stefano.tempesta@outlook.com
www.blogchain.space
twitter.com/stefanotempesta

Stefano Tempesta is a Microsoft RD (Regional Director), triple MVP on Azure, AI and Business Applications, and a member of the Blockchain Council. A regular speaker at international IT conferences, including Microsoft Ignite and Tech Summit, Stefano's interests extend to blockchain and AI-related technologies. He created Blogchain Space (blogchain.space), a blog about blockchain technologies, writes for MSDN Magazine and MS Dynamics World, and publishes Machine Learning experiments on the Azure AI Gallery.

Today's world of always-on applications and APIs has availability and reliability requirements that, only a few decades ago, would have applied to just a handful of mission-critical services around the globe. Likewise, the potential for rapid, viral growth of a service means that every application has to be built to scale nearly instantly, in response to user demand. These constraints and requirements mean that almost every application that's built, whether it's a consumer mobile app or a back-end payments application, would benefit from a distributed system. But building distributed systems is challenging.

Containerized building blocks are the basis for the development of reusable components and patterns that dramatically simplify and make accessible the practices of building reliable distributed systems. Reliability, scalability, and separation of concerns dictate that real-world systems are built out of many different components spread across multiple computers. In contrast to single-node patterns, multi-node distributed patterns are more loosely coupled. Although the patterns dictate communication between the components, this communication is based on network calls. Furthermore, many calls are issued in parallel, and systems coordinate via loose synchronization rather than tight constraints.

Microservice Architecture
Recently, the term microservices has become a buzzword for describing multi-node distributed software architectures. Microservices describe a system built out of many different components running in different processes and communicating over defined APIs. Microservices stand in contrast to monolithic systems like that in Figure 1, which tend to place all of the functionality for a service within a single, tightly coordinated application.

Figure 1: A monolithic application with all its functions in a single application

Figure 2, instead, illustrates how individual functions can be separated into isolated services that interact with each other through a programming interface (API).

There are numerous benefits to the microservices approach, most of which are centered around reliability and agility. Microservices break down an application into small pieces, each focused on providing a single service. This reduced scope enables each service to be built and maintained by a single agile team. Reduced team size also reduces the overhead associated with keeping a team focused and moving in one direction.

Additionally, the introduction of formal APIs between different microservices decouples the teams from one another and provides a reliable contract among the different services. This formal contract reduces the need for tight synchronization among the teams because the team providing the API understands the surface area that it needs to keep stable, and the team consuming the API can rely on a stable service without worrying about its details. This decoupling enables teams to independently manage their coding and release schedules, which, in turn, improves each team's ability to iterate and improve their function.

Finally, the decoupling of microservices enables better scaling. Because each component has been broken out into its own service, it can be scaled independently. It's rare for each service within a larger application to grow at the same rate or have the same way of scaling. Some systems are stateless and can simply scale horizontally, whereas other systems maintain state and require sharding or other approaches to scale. By separating each service out, you can use the approach to scaling that suits it best. This isn't possible when all services are part of a single monolith.

Azure Kubernetes Service
Kubernetes (https://kubernetes.io/) is a rapidly evolving platform that manages container-based applications and their associated networking and storage components. The focus is on the application workloads, not the underlying infrastructure components. Kubernetes provides a declarative approach to deployments, backed by a robust set of APIs for management operations.

You can build and run modern, portable, microservices-based applications that benefit from Kubernetes orchestrating and managing the availability of those application components. As an open platform, Kubernetes allows you to build your applications with your preferred programming language, OS, libraries, or messaging bus. Existing continuous integration and continuous delivery (CI/CD) tools can integrate with Kubernetes to schedule and deploy releases.

Azure Kubernetes Service (AKS: https://azure.microsoft.com/en-us/services/kubernetes-service/) provides a managed Kubernetes service that reduces the complexity of deployment and core management tasks, including coordinating upgrades. The AKS cluster masters are managed by the Azure platform, and you only pay for the AKS nodes that run your applications.

Figure 3 shows some basic components of a Kubernetes cluster architecture that are useful to get familiar with before describing some typical design patterns for distributed systems and how to implement them in AKS. First of all, a Kubernetes cluster is divided into two components:

• Cluster master nodes that provide the core Kubernetes services and orchestration of application workloads.
• Nodes that run your application workloads.

To run your applications and supporting services, you need a Kubernetes node. An AKS cluster has one or more nodes; each node is an Azure virtual machine that runs the Kubernetes node components and container runtime. The kubelet is the Kubernetes agent that processes the orchestration requests from the cluster master and schedules the requested containers to run. Virtual networking is handled by the proxy on each node. The proxy routes network traffic and manages IP addressing for services and pods (more about pods later in this article). The container runtime is the component that allows containerized applications to run and interact with additional resources, such as the virtual network and storage.

Figure 2: A microservice-oriented architecture, with functions isolated in their own services

Replicated Load-Balanced Services
Replicated load-balanced services are probably the simplest distributed pattern, and the one most people are familiar with. In such an application, illustrated in Figure 4, every service is identical to every other service and all are capable of supporting traffic. The pattern consists of a scalable number of services with a load balancer in front of them. The load balancer is typically either completely round-robin or uses some form of session stickiness.

Figure 3: Kubernetes cluster components

Stateless services are ones that don't require a saved state to operate correctly. In the simplest stateless applications, even individual requests may be routed to separate instances of the service. Stateless systems are replicated to provide redundancy and scale.

To create a replicated service in Azure Kubernetes Service, you can use the AKS virtual node to provision pods that start in seconds. With virtual nodes, you have fast provisioning of pods, and you only pay per second for their execution time. In a scaling scenario, you don't need to wait for the Kubernetes cluster autoscaler to deploy VM compute nodes to run the additional pods. A Kubernetes pod is a group of containers that are deployed together on the same host. If you frequently deploy single containers, you can generally replace the word "pod" with "container" and accurately understand the concept.

Figure 4: Load-balanced replicated stateless application

References
Burns, Brendan (O'Reilly, 2017). Designing Distributed Systems: Patterns and Paradigms for Scalable, Reliable Services.
Azure Kubernetes Service. https://docs.microsoft.com/azure/aks/

If the resource needs of your application change, you can manually scale an AKS cluster to run a different number of nodes. When you scale down, nodes are carefully cordoned and drained to minimize disruption to running applications. When you scale up, AKS waits until nodes are marked ready by the Kubernetes cluster before pods are scheduled on them.

To scale the cluster nodes, first get the name of your node pool using the az aks show command. The following command gets the node pool name for the cluster named myCluster in the myResourceGroup resource group:

az aks show --resource-group myResourceGroup
    --name myCluster
    --query agentPoolProfiles

Then, use the az aks scale command to scale the cluster nodes. The following command scales the cluster named myCluster to a single node. Provide your own --nodepool-name from the previous command:

az aks scale --resource-group myResourceGroup
    --name myCluster --node-count 1
    --nodepool-name <node pool name>

This is a way to manually scale an AKS cluster to increase or decrease the number of nodes. You can also use the cluster autoscaler (currently in preview in AKS) to automatically scale your cluster. The autoscaler component can watch for pods in your cluster that can't be scheduled because of resource constraints. When issues are detected, the number of nodes is increased to meet the application demand. Nodes are also regularly checked for a lack of running pods, with the number of nodes then decreased as needed. This ability to automatically scale the number of nodes in your AKS cluster up or down lets you run an efficient, cost-effective cluster.
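As a rough illustration of the round-robin strategy mentioned above, here's a minimal TypeScript sketch of a load balancer that cycles through identical replicas. The replica names are invented for the example; a real balancer (such as the Kubernetes service proxy) works at the network level:

```typescript
// Minimal round-robin load balancer sketch: every replica is identical,
// so requests can simply be handed to each one in turn.

class RoundRobinBalancer {
  private next = 0;

  constructor(private replicas: string[]) {
    if (replicas.length === 0) throw new Error("need at least one replica");
  }

  // Pick the replica for the next request, cycling through the pool.
  pick(): string {
    const replica = this.replicas[this.next];
    this.next = (this.next + 1) % this.replicas.length;
    return replica;
  }
}

const balancer = new RoundRobinBalancer(["pod-a", "pod-b", "pod-c"]);
console.log([balancer.pick(), balancer.pick(), balancer.pick(), balancer.pick()]);
// cycles through pod-a, pod-b, pod-c, then wraps back to pod-a
```

Session stickiness would instead key the choice on a client identifier (for example, a hash of a session cookie), so that a given client keeps landing on the same replica.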



To use the cluster autoscaler, you need the aks-preview CLI extension version 0.4.1 or higher. Install the aks-preview extension using the az extension add command, then check for any available updates using the az extension update command:

# Install the aks-preview extension
az extension add --name aks-preview

# Update the extension to make sure you have
# the latest version installed
az extension update --name aks-preview

It's not advisable to enable preview features on production subscriptions. Use a separate subscription to test preview features and gather feedback.

Scaling containers applies not only to increased or decreased workload, but also to scheduled application demand. For example, you may want to adjust your cluster between workdays and evenings or weekends. Figure 5 describes the two options for AKS clusters to scale:

• The cluster autoscaler watches for pods that can't be scheduled on nodes because of resource constraints. The cluster automatically increases the number of nodes.
• The horizontal pod autoscaler uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If a service needs more resources, the number of pods is automatically increased to meet the demand.

Both the horizontal pod autoscaler and cluster autoscaler can also decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity for a period of time.

Figure 5: Scaling options for AKS

Sharded Services
Replicating stateless services improves reliability, redundancy, and scaling. Within a replicated service, each replica is entirely homogeneous and capable of serving every request. Another design pattern emerges in contrast to replicated services, called sharded services. Figure 6 illustrates how, with sharded services, each replica, or shard, is only capable of serving a subset of all requests. A load-balancing node is responsible for examining each request and distributing it to the appropriate shard for processing.

Figure 6: Sharded load-balanced services respond to different user requests

Replicated services are generally used for building stateless services, whereas sharded services are generally used for building stateful services. The primary reason for sharding the data is that the size of the state is too large to be served by a single computer. Sharding enables you to scale a service in response to the size of the state that needs to be served.

Applications that run in Azure Kubernetes Service may need to store and retrieve data. For some application workloads, this data storage can use local fast storage on the node, which is no longer needed when the pods are deleted. Other application workloads may require storage that persists on more regular data volumes within the Azure platform. Multiple pods may need to share the same data volumes, or reattach data volumes if the pod is rescheduled on a different node. Finally, you may need to inject sensitive data or application configuration information into pods.

A good practice is not to manually create and assign persistent volumes, as this adds management overhead and limits your ability to scale. The recommendation, instead, is to use dynamic storage provisioning: define the appropriate policy to minimize unneeded storage costs once pods are deleted, and allow your applications to grow and scale as needed. Figure 7 refers to a persistent volume claim (PVC) that lets you dynamically create storage as needed. The underlying Azure disks are created as pods request them. In the pod definition, you request a volume to be created and attached to a designated mount path.

To define different tiers of storage, such as Premium and Standard, you can create a Storage Class. The Storage Class also defines the Reclaim policy. This Reclaim policy controls the behavior of the underlying Azure storage resource when the pod is deleted and the persistent volume may no longer be required. The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be retained when the pod is deleted:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium-retain
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
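To make the shard-selection step in the sharded-services pattern concrete, here's a minimal TypeScript sketch of the load-balancing node described above: it hashes a request key and uses the result to route each request to one shard. The hash function and shard names are illustrative only, not from the article:

```typescript
// Sketch of a sharding router: requests with the same key always land
// on the same shard, so each shard only serves a subset of the state.

function hashKey(key: string): number {
  // Simple deterministic string hash (illustrative only).
  let h = 0;
  for (const ch of key) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h;
}

function shardFor(key: string, shards: string[]): string {
  return shards[hashKey(key) % shards.length];
}

const shards = ["shard-0", "shard-1", "shard-2"];
// The same user key deterministically maps to the same shard.
console.log(shardFor("user-42", shards) === shardFor("user-42", shards)); // true
```

Note that this modulo-based scheme reshuffles most keys whenever the number of shards changes; production systems often use consistent hashing instead so that resizing the shard pool moves only a fraction of the keys.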

You can create a persistent volume claim with the kubectl apply command and specify the YAML file containing the configuration parameters:

$ kubectl apply -f azure-premium.yaml
persistentvolumeclaim/azure-managed-disk created

Figure 7: AKS cluster with persistent storage

A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods and can be dynamically or statically provisioned. As written before, a storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see Kubernetes Storage Classes (https://kubernetes.io/docs/concepts/storage/storage-classes/). The following command shows the pre-created storage classes available within an AKS cluster:

$ kubectl get sc
NAME               PROVISIONER                AGE
default (default)  kubernetes.io/azure-disk   1h
managed-premium    kubernetes.io/azure-disk   1h

Modern application development often aims for stateless applications, but stateful sets can be used for stateful applications, such as applications that include database components. A StatefulSet is similar to a deployment in that one or more identical pods are created and managed. Replicas in a StatefulSet follow a graceful, sequential approach to deployment, scale, upgrades, and terminations. For more information, see Kubernetes StatefulSets (https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/).

Scatter-Gather Pattern
So far, I've examined systems that replicate for scalability in terms of the number of requests processed per second (the stateless replicated pattern), as well as scalability for the size of the data (the sharded data pattern). The scatter-gather pattern in Figure 8 uses replication for scalability in terms of time. It allows you to achieve parallelism in servicing requests, enabling you to service them significantly faster than you could if you had to service them sequentially.

Figure 8: User request and merged shard responses in the scatter-gather pattern

Like replicated and sharded systems, the scatter-gather pattern is a tree pattern with a root that distributes requests and leaves that process those requests. However, in contrast to replicated and sharded systems, scatter-gather requests are simultaneously farmed out to all of the replicas in the system. Each replica does a small amount of processing and then returns a fraction of the result to the root. The root server then combines the various partial results together to form a single complete response and sends this response back out to the client.

Scatter-gather is quite useful when you have a large amount of mostly independent processing that's needed to handle a particular request. This pattern can be seen as sharding the computation necessary to service the request, rather than sharding the data (although data sharding may be part of it as well).

To see an example of scatter-gather in action, consider the task of searching across a large database of documents for all documents that contain the words "car" and "Ferrari." One way to perform this search is to open up all of the documents, read through the entire set searching for the words in each document, and then return the set of documents that contain both words to the user. As you might imagine, this is quite a slow process because it requires opening and reading through a large number of files for each request. To make request processing faster, you can build an index. The index is effectively a hashtable, where the keys are individual words (e.g., "car") and the values are a list of documents containing that word.

Now, instead of searching through every document, finding the documents that match any one word is as easy as doing a lookup in this hashtable. However, one important ability was lost. Remember that you were looking for all documents that contained "car" and "Ferrari." Because the index only has single words, not conjunctions of words, you still need to find the documents that contain both words. Luckily, this is just an intersection of the sets of documents returned for each word. Given this approach, you can implement this document search as an example of the scatter-gather pattern. When a request comes in to the document search root, it parses the request and farms it out to two leaf computers (one for the word "car" and one for the word "Ferrari"). Each of these computers returns a list of documents that match one of the words, and the root node returns the list of documents containing both terms.

 Stefano Tempesta
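The document-search example maps naturally onto a few lines of code. Here's a TypeScript sketch (the index contents and helper names are invented for illustration): the root scatters one lookup per word to the leaves in parallel, then gathers and intersects the partial results:

```typescript
// Scatter-gather sketch: each "leaf" owns an index lookup for one word;
// the root runs the lookups in parallel and intersects the results.

const index: Record<string, string[]> = {
  car: ["doc1", "doc2", "doc4"],
  ferrari: ["doc2", "doc3", "doc4"],
};

// A leaf returns the documents containing a single word; async models
// a network call to a leaf server.
async function leafLookup(word: string): Promise<string[]> {
  return index[word] ?? [];
}

// The root scatters one request per word, then gathers and intersects.
async function search(words: string[]): Promise<string[]> {
  const partials = await Promise.all(words.map(leafLookup)); // scatter
  return partials.reduce((acc, docs) =>                      // gather
    acc.filter((d) => docs.includes(d))
  );
}

search(["car", "ferrari"]).then((docs) => console.log(docs)); // ["doc2", "doc4"]
```

Because the lookups run concurrently via Promise.all, the total latency is governed by the slowest leaf rather than the sum of all lookups, which is exactly the time-based scalability the pattern is after.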



ONLINE QUICK ID 1909081

Nest.js Step-by-Step: Part 2


In the first part of this series (https://www.codemag.com/Article/1907081/Nest.js-Step-by-Step), you were introduced to the Nest Framework and you started building the To Do REST API using mock data. Mock data for development and testing isn't sufficient to make a realistic and ready-to-launch app. Using a database to store your data is part of the process and is mandatory for making a great launch. In this article, I'll use an instance of a PostgreSQL database running locally in a Docker container. To access the database and execute the queries and mutations, I'll make use of TypeORM, which is one of the most mature Object Relational Mapping (ORM) tools in the world of JavaScript. Nest.js comes with built-in support for TypeORM.

Bilal Haidar
bhaidar@gmail.com
https://www.bhaidar.dev
@bhaidar

Bilal Haidar is an accomplished author, Microsoft MVP of 10 years, ASP.NET Insider, and has been writing for CODE Magazine since 2007. With 15 years of extensive experience in Web development, Bilal is an expert in providing enterprise Web solutions. He works at Consolidated Contractors Company in Athens, Greece as a full-stack senior developer. Bilal offers technical consultancy for a variety of technologies including Nest JS, Angular, Vue JS, JavaScript and TypeScript.

The source code for this article series is available on this GitHub repository: https://github.com/bhaidar/nestjs-todo-app/ and online at CODE Magazine, associated with this article.

I'll start by introducing TypeORM and its features, then explore how Nest.js integrates with TypeORM. Finally, the step-by-step demonstration shows you how to convert the code from Part 1 into code that is database-aware and eliminates the use of any mock data.

Nest.js can deal with a rich variety of databases ranging from relational databases to NoSQL ones.

What is TypeORM?
TypeORM is a JavaScript library that's capable of connecting to several database engines, including PostgreSQL, Microsoft SQL Server, and MongoDB, just to name a few. By hiding the complexity and specificity of connecting to different database engines, TypeORM enables the communication between your application and the back-end database of your choice.

TypeORM is built on top of TypeScript decorators that allow you to decorate your entities and their corresponding properties so that they map to a database table with columns.

TypeORM supports both the Active Record and Data Mapper patterns. I won't be touching on these topics, but you can read more about them by following this link: https://medium.com/oceanize-geeks/the-active-record-and-data-mappers-of-orm-pattern-eefb8262b7bb.

In general, I prefer using the Data Mapper pattern and specifically using Repositories to access the database. TypeORM supports the repository design pattern, so each entity has its own Repository object. These repositories can be obtained from the database connection itself.

In addition, TypeORM allows you to create your own custom repository by letting you extend the standard base repository and add any custom functions that you need.

Here's a quick summary of the TypeORM features that I'm going to use in the application you're about to build:

• The Entity decorator to mark a JavaScript class as an entity in the database.
• The Column decorator to customize the mapping between a JavaScript object property and the corresponding column in the database. Customization includes specifying the column data type, length, whether to allow null or not, and other useful settings.
• A repository object per entity, generated automatically. You can inject those objects into your Nest.js services and start accessing the database.
• Table relationships, including one-to-one, one-to-many, and many-to-many relationships. In the course of this series, I'll be using mostly one-to-many and many-to-many relationships.

I strongly advise you to give it a try if you're serious about using TypeORM in your professional projects. Access everything TypeORM at https://typeorm.io.

How Nest Framework Integrates with TypeORM
The @nestjs/typeorm package is a Nest.js module that wraps around the TypeORM library and adds a few service providers into the Nest.js Dependency Injection system. The following is a list of services that are added by default:

• The TypeORM database Connection object
• The TypeORM EntityManager object (used with the data mapper pattern)
• A TypeORM Repository object per entity (for each entity defined in the application)

Every time a service or controller in your application injects any of the above services, Nest.js serves them from within its Dependency Injection system.

You can check the source code for this module by following this URL: https://github.com/nestjs/typeorm.

Demo
"The proof of the pudding is in the eating!"

This is a famous saying that has always haunted me since I started software development. To best understand the concepts that I'm going to explain, I recommend that you grab the source code from Part One of this series (in CODE Magazine July/August 2019), and follow along to apply it as you go. Another saying is that practice makes perfect, so let's start!

Setup Docker & PostgreSQL database
Step 1: Open the application in your favorite editor and create a new git branch to start adding database support.

git checkout -b add-db-support


Step 2: Install the NPM packages that are required to run the application and connect to the database by issuing the following command:

yarn add @nestjs/typeorm typeorm pg

This command installs three NPM packages:

• @nestjs/typeorm is the Nest.js module wrapper around TypeORM.
• typeorm is the official NPM package for the TypeORM library.
• pg is the official library connector for the PostgreSQL database.

Step 3: Set up a docker-compose file to run an instance of PostgreSQL on top of Docker. Docker is a prerequisite you have to have on your computer in order to run this step.

version: '3'
services:
  db:
    container_name: todo_db
    image: postgres:10.7
    volumes:
      - ./db/initdb.d:/docker-entrypoint-initdb.d
    ports:
      - '5445:5432'

The docker-compose file you'll use for the application is simple. The file defines a single service called db, with the following settings:

• container_name: The name Docker assigns to the new container.
• image: The image that Docker needs to download and instantiate a new container from.
• volumes: Docker creates a container volume by mapping a local directory in your application into the /docker-entrypoint-initdb.d directory inside the Postgres container.
• ports: Port mapping allows Docker to expose the PostgreSQL service running inside the container on port 5432 to the host computer on port 5445.

To run any initialization script when Docker first creates the container, you can place the script file inside the /db/initdb.d directory and Docker automatically runs it upon creating the container.

By defining a Docker volume mapped to a local directory, you have more control over your Docker container.

Step 4: Create a new bash script file inside the /db/initdb.d directory. Listing 1 shows the content of this file.

Listing 1: Bash file

#!/usr/bin/env bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    CREATE USER todo;
    CREATE DATABASE todo ENCODING UTF8;
    GRANT ALL PRIVILEGES ON DATABASE todo TO todo;
    ALTER USER todo WITH PASSWORD 'password123';
    ALTER USER todo WITH SUPERUSER;
EOSQL

The script starts by including a shebang line that makes it an executable script. The script is straightforward. It starts by creating a new database user todo and a new database todo. It also makes the todo user a SuperUser and assigns it the password password123.

Read more about customizing the initialization phase that Docker uses to instantiate a new Postgres container here: https://docs.docker.com/samples/library/postgres/#initialization-scripts.

Step 5: If you're working on a Windows computer, I recommend converting the script file above to a UNIX format by issuing the following command:

dos2unix init-users-db.sh

If you can't find the dos2unix utility on your computer, you can download it from this URL: https://sourceforge.net/projects/dos2unix/.

Step 6: Create the Docker container and build your database. Let's start by adding a new NPM script to the package.json file to make the task of starting up Docker and creating the container easier. Add the following script:

"run:services": "docker-compose up && exit 0"

To start up the Postgres container, you simply run the following command:

yarn run "run:services"

This command creates the Postgres container, a new PostgreSQL database, and a database user.

Step 7: Verify that the database is up and running by opening a new command-line session and running the following docker command:

docker exec -it todo_db bash

The command above starts an interactive docker session on the container instance that you've just created. By default, running this command opens a bash session for you to run and execute your commands. To verify that the database exists, run the following commands:

• Connect to the Postgres engine: psql -U postgres
• List the existing databases: \l

You should be able to see an entry for the todo database.


You can read more about the docker exec command by following this URL: https://docs.docker.com/engine/reference/commandline/exec/.

Configure TypeORM
Step 1: Initialize TypeORM by creating a new ormconfig.json file at the root of your application. Simply paste the JSON content in Listing 2 into the JSON file.

TypeORM expects this configuration file to create a connection to the database, deal with database migrations, and all things related. TypeORM supports a variety of means to store the connection options, including JSON, JS, XML, and YML files, as well as environment variables.

You can read more about the TypeORM connection options by following this URL: https://github.com/TypeORM/TypeORM/blob/master/docs/using-ormconfig.md.

The most important connection options that you need for the application are:

• Name: The name of the configuration settings. You can have one named "development" and another named "production". You need one set of configurations per running environment.
• Type: The type of the database that TypeORM is connecting to. In this case, the type is "postgres".
• Host, port, username, password, and database: These settings resemble the details of the connection string that TypeORM uses to connect to the underlying database.
• Synchronize: A value of true means that TypeORM will auto-synchronize the application code and the database structure every time the application runs. This is good for development, but you should avoid using it in production. Preferably, you should create database migrations for each and every change you make in the application code, even in development. Make it a habit. It'll guarantee a smooth transition when you're ready to deploy your app and database to production.
• Logging: If you enable this setting, TypeORM will emit some logging messages on the application's console when it's running. This is helpful in the development phase.
• Entities: This is the path where TypeORM finds the entities your application uses and maps to tables in the database.
• Migrations: This is the path where TypeORM finds the migrations you create and runs them against the database.
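The per-environment naming trick described in the Name option can be sketched in plain TypeScript. This is a hypothetical standalone sketch (not TypeORM's actual implementation, which does something similar inside getConnectionOptions() when ormconfig.json holds an array): given several named option sets, pick the one matching the current environment, falling back to development.

```typescript
// Hypothetical sketch of selecting named connection options per
// environment; TypeORM resolves an ormconfig array in a similar way.
interface NamedOptions {
  name: string;
  type: string;
  host: string;
  synchronize: boolean;
}

const allOptions: NamedOptions[] = [
  { name: 'development', type: 'postgres', host: 'localhost', synchronize: false },
  { name: 'production', type: 'postgres', host: 'db.internal', synchronize: false },
];

// Pick the option set whose name matches the given environment,
// defaulting to 'development' when env is unset or unknown.
export function pickOptions(
  options: NamedOptions[],
  env: string | undefined,
): NamedOptions {
  return (
    options.find(o => o.name === env) ??
    options.find(o => o.name === 'development')!
  );
}

console.log(pickOptions(allOptions, 'production').host); // db.internal
console.log(pickOptions(allOptions, undefined).name);    // development
```

The fallback mirrors the article's advice: one option set per running environment, with development as the safe default.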

Step 2: Amend the tsconfig.json file of your application to look like the content of Listing 3. The next few steps will make heavy use of the @todo/ path, as you will see.

Listing 2: ormconfig.json file

{
  "name": "development",
  "type": "postgres",
  "host": "localhost",
  "port": 5445,
  "username": "todo",
  "password": "password123",
  "database": "todo",
  "synchronize": false,
  "logging": true,
  "entities": ["src/**/*.entity.ts", "dist/**/*.entity.js"],
  "migrations": ["src/migration/**/*.ts", "dist/migration/**/*.js"],
  "subscribers": ["src/subscriber/**/*.ts", "dist/subscriber/**/*.js"],
  "cli": {
    "entitiesDir": "src",
    "migrationsDir": "src/migration",
    "subscribersDir": "src/subscriber"
  }
}

Listing 3: tsconfig.json file

{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "moduleResolution": "node",
    "outDir": "./dist",
    "baseUrl": "./",
    "removeComments": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "sourceMap": true,
    "lib": ["es2015"],
    "paths": {
      "@todo/*": ["src/todo/*"],
      "@shared/*": ["src/shared/*"]
    }
  },
  "exclude": ["node_modules"]
}

With Nest.js, there's a variety of ORMs to use in your application. You can even develop your own integration with any other model and make it work inside Nest.js.

Step 3: Change all references to TodoEntity and TaskEntity in the application code to match the ones here:

import { TodoEntity }
  from '@todo/entity/todo.entity';
import { TaskEntity }
  from '@todo/entity/task.entity';

Later, when you start creating migrations, TypeORM requires having the same exact reference path to all entities stored in the database. For instance, if the application code references the same entity in two different places with two different paths, TypeORM assumes that these are two different entities. To make sure that TypeORM is happy, use the same path for all entities all over the application code.

Step 4: I'm making use of a few helper methods that wrap TypeORM API calls. Create a new /src/shared/utils.ts. Copy the functions from Listing 4.

The getDbConnectionOptions() function reads the TypeORM configuration settings from the ormconfig.json file based

48 Nest.js Step-by-Step: Part 2 codemag.com


Listing 4: /shared/utils.ts file

import { getConnectionOptions, getConnection } from 'typeorm';

export const getDbConnectionOptions = async (
  connectionName: string = 'default',
) => {
  const options = await getConnectionOptions(
    process.env.NODE_ENV || 'development',
  );
  return {
    ...options,
    name: connectionName,
  };
};

export const getDbConnection = async (
  connectionName: string = 'default') => {
  return await getConnection(connectionName);
};

export const runDbMigrations = async (
  connectionName: string = 'default') => {
  const conn = await getDbConnection(connectionName);
  await conn.runMigrations();
};

on the current Node environment. Remember back in Step 8, when you assigned the name development to the configuration settings object? This is a convenient trick that proves helpful, especially when you move the application to production: you can easily add another configuration object with the name production.

The getDbConnection() function retrieves a connection from TypeORM.

The runDbMigrations() function runs the pending migrations using the active database connection.

Step 5: Let's change the AppModule and make it return a DynamicModule by accepting TypeORM connection settings. Locate the /src/app.module.ts file and replace its content with the code in Listing 5.

Listing 5: /src/app.module.ts file

@Module({})
export class AppModule {
  static forRoot(
    connOptions: ConnectionOptions): DynamicModule {
    return {
      module: AppModule,
      controllers: [AppController],
      imports: [TodoModule],
      providers: [AppService],
    };
  }
}

The module now defines a forRoot() method that accepts the connection options and returns a DynamicModule.

The DynamicModule is a normal Nest.js module that you return and customize the way you want. You can read more about DynamicModule at: https://docs.nestjs.com/modules.

Step 6: Import the TypeOrmModule by replacing the content of AppModule with the code in Listing 6.

Listing 6: /src/app.module.ts – Import TypeORM

@Module({})
export class AppModule {
  static forRoot(
    connOptions: ConnectionOptions): DynamicModule {
    return {
      module: AppModule,
      controllers: [AppController],
      imports: [TodoModule,
        TypeOrmModule.forRoot(connOptions)],
      providers: [AppService],
    };
  }
}

You simply import the TypeOrmModule into the imports section of the AppModule, passing the connection options as follows:

TypeOrmModule.forRoot(connOptions)

Now that the root module of your app imports TypeOrmModule, let's continue and see how to import it for the feature module.

Bootstrap the Nest.js App
Step 1: Amend the /src/main.ts file to load the TypeORM connection options and pass them to AppModule. The code bootstraps the application and creates an instance of AppModule.

Replace the line below inside the main.ts file:

const app = await NestFactory.create(AppModule);

With the code below:

const app = await NestFactory.create(
  AppModule.forRoot(await
    getDbConnectionOptions(process.env.NODE_ENV)),
);

The code makes use of the helper function defined earlier to load the TypeORM connection options based on the current executing environment, and then feeds the results to the AppModule.forRoot() method.

Step 2: Let's automate the process of running database migrations while the application is bootstrapping.

By default, Nest.js expects that you generate or create your own database migrations using TypeORM CLI commands and then run them against the database.

Open the /src/main.ts file and add the following line of code just beneath the code block that creates the AppModule:

/**
 * Run DB migrations
 */
await runDbMigrations();
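The bootstrap sequence this section sets up (load options for the current environment, create the app from the dynamic module, run pending migrations, and only then start serving) can be sketched with plain stand-ins. Every name below is hypothetical — none of these are real Nest.js or TypeORM APIs; the sketch only demonstrates the ordering.

```typescript
// Hypothetical stand-ins that record the bootstrap ORDER only.
const steps: string[] = [];

async function loadOptionsStub(env: string | undefined) {
  steps.push(`load-options:${env ?? 'development'}`);
  return { name: 'default' };
}

async function createAppStub(_options: { name: string }) {
  steps.push('create-app');
  return { listen: async (_port: number) => { steps.push('listen'); } };
}

async function runDbMigrationsStub(): Promise<void> {
  steps.push('run-migrations');
}

// Mirrors main.ts: options are loaded first, the app instance is
// created from them, migrations run before the first HTTP request
// can ever arrive, and listen() comes last.
export async function bootstrapSketch(env?: string): Promise<string[]> {
  const app = await createAppStub(await loadOptionsStub(env));
  await runDbMigrationsStub();
  await app.listen(3000);
  return steps;
}
```

Running migrations between app creation and listen() is the property the article relies on: the entity model and the database are guaranteed to be in sync before any request is handled.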



Listing 7: TodoEntity and TaskEntity classes

import { TaskEntity } from '@todo/entity/task.entity';
import {
  Entity,
  PrimaryGeneratedColumn,
  Column,
  CreateDateColumn,
  OneToMany,
} from 'typeorm';

@Entity('todo')
export class TodoEntity {
  @PrimaryGeneratedColumn('uuid') id: string;
  @Column({ type: 'varchar', nullable: false }) name: string;
  @Column({ type: 'text', nullable: true })
  description?: string;
  @CreateDateColumn() createdOn?: Date;
  @CreateDateColumn() updatedOn?: Date;

  @OneToMany(type => TaskEntity, task => task.todo)
  tasks?: TaskEntity[];
}

import {
  Entity,
  PrimaryGeneratedColumn,
  Column,
  CreateDateColumn,
  ManyToOne,
} from 'typeorm';
import { TodoEntity } from '@todo/entity/todo.entity';

@Entity('task')
export class TaskEntity {
  @PrimaryGeneratedColumn('uuid') id: string;
  @Column({ type: 'varchar', nullable: false }) name: string;
  @CreateDateColumn() createdOn?: Date;

  @ManyToOne(type => TodoEntity, todo => todo.tasks)
  todo?: TodoEntity;
}

TypeORM is an ORM that can run in NodeJS, Browser, Cordova, PhoneGap, Ionic, React Native, NativeScript, Expo, and Electron platforms, and can be used with TypeScript and JavaScript (ES5, ES6, ES7, and ES8).

You defined the runDbMigrations() method back in Step 4 of the Configure TypeORM section.

Now, when the application is bootstrapping and before it starts listening for new HTTP requests, it runs any pending migrations against the database and makes sure that the application entity model is in sync with the database table model. This is exactly what happens when you set synchronize: "true" in the ormconfig.json file. However, being able to decide when to run migrations puts you in the driver's seat, with greater control over migrations.

Define Entity Relationships
Step 1: Let's convert the TodoEntity and TaskEntity objects to real TypeORM entities.

The rule is simple. To map a JavaScript entity object to a PostgreSQL database table, you decorate the entity object with the @Entity(name: string) decorator offered by the TypeORM API. You can read up on TypeORM entities here: https://github.com/TypeORM/TypeORM/blob/master/docs/entities.md.

To map a property on the entity object to a column on the database table, you decorate the property with the @Column() decorator, also offered by the TypeORM API. Listing 7 shows both entities decorated and ready to be used by TypeORM to create their corresponding database tables. There are many decorators offered by TypeORM.

For example, you have relation decorators to define a relation between one entity and another, listener decorators to react to events triggered by TypeORM, and many others. Here's a list of all decorators supported by TypeORM: https://github.com/TypeORM/TypeORM/blob/master/docs/decorator-reference.md.

TodoEntity defines a property tasks of type array of TaskEntity. Here, the relation is that one To Do entity has one or many Task entities. Hence, the code uses the @OneToMany() decorator for this purpose.

On the other side of the relation, TaskEntity defines a property todo of type TodoEntity. One Task belongs to one and only one To Do entity.

Step 2: Import the TypeOrmModule into the TodoModule feature module. With this import, you need to list all TypeORM entities that you'll require a repository for while coding the module.

@Module({
  imports:
    [TypeOrmModule.forFeature([TodoEntity,
      TaskEntity])],
  controllers: [TodoController, TaskController],
  providers: [TodoService, TaskService],
})
export class TodoModule {}

You import the module by adding it to the list of imports on the feature module and specifying the list of entities to manage and have their repositories ready for use.

Step 3: Generate a migration!

Before running a migration, make sure the ormconfig.json file has all the necessary information it needs to instruct TypeORM about the location of the migrations on disk. If you recall, in Step 8 one of the settings was to specify the local directory where migration files are stored:

"migrations": ["src/migration/**/*.ts",
  "dist/migration/**/*.js"]

It's important to add this configuration setting before you generate or run your migrations.

As a prerequisite, you also need to install the tsconfig-paths NPM package. This package helps load modules whose location is specified in the paths section of the tsconfig.json file.

Previously, you added some paths to the tsconfig.json file so that you can always use a unique path to refer to the TodoEntity and TaskEntity entities. Because of that, this NPM package is needed. To install the package as a dev dependency, issue the following command:

yarn add tsconfig-paths -D

50 Nest.js Step-by-Step: Part 2 codemag.com


Now, to generate a migration, let's first add a couple of NPM scripts so that you don't have to write a long command every time you want to generate a new migration:

"typeorm":
  "ts-node -r tsconfig-paths/register
    ./node_modules/typeorm/cli.js",
"migration:generate":
  "yarn run typeorm migration:generate -n",

The first script runs the TypeORM CLI using ts-node, the TypeScript execution engine and REPL for Node.js.

To generate a new migration, you simply use the following command:

yarn run "migration:generate" InitialMigration

The command compares the entity objects in the application to the corresponding database tables (if already present) and then generates the necessary steps so that both models are in sync.

Step 4: Run the migrations!

Previously, you configured the application to run any pending migrations during the bootstrapping phase. To run the new migrations that you've generated, simply start the application and the migrations will run automatically. To run the application, issue the following command:

yarn run start:dev

Step 5: Change the TodoController class. There isn't much change required except decorating the update() and create() methods with the @UsePipes() decorator.

In Nest.js, Pipes allow you to transform data from one format to another. You can even use Pipes to perform data validation on the input that the client passes with the HTTP request.

Nest.js comes with two built-in Pipes: ValidationPipe and ParseIntPipe. You use the former to add validation over the input parameters. Use the latter to validate and convert an input parameter into a valid integer value. You can also create your own custom Pipe. Go to https://docs.nestjs.com/pipes for more.

For this project, you're going to use ValidationPipe. But first, you need to install the following NPM packages, because the ValidationPipe uses them internally:

yarn add class-transformer class-validator

Switch to the TodoController class and amend both the update() and create() methods as shown in Listing 8.

Listing 8: TodoController class

@Post()
@UsePipes(new ValidationPipe())
async create(@Body() todoCreateDto:
  TodoCreateDto): Promise<TodoDto> {
  return await this.todoService.createTodo(todoCreateDto);
}

@Put(':id')
@UsePipes(new ValidationPipe())
async update(
  @Param('id') id: string,
  @Body() todoDto: TodoDto,
): Promise<TodoDto> {
  return await this.todoService.updateTodo(todoDto);
}

The code uses the @UsePipes() decorator, passing a new instance of the ValidationPipe class. Now, to add validation rules on the TodoCreateDto and TodoDto objects, let's decorate the DTO objects with a few validations that are offered by the class-validator NPM package. For the complete list of validation decorators that the class-validator library offers, visit: https://github.com/typestack/class-validator.

Listing 9 shows all the DTO objects annotated with the proper validation rules.

Listing 9: DTO classes

export class TodoDto {
  @IsNotEmpty()
  id: string;

  @IsNotEmpty()
  name: string;

  createdOn?: Date;
  description?: string;
  tasks?: TaskDto[];
}

export class TodoCreateDto {
  @IsNotEmpty()
  name: string;

  @MaxLength(500)
  description?: string;
}

You can check the validation annotations on the TaskDto and TaskCreateDto objects in the GitHub repository of this article or by checking out the online version of this article.

Use Repositories within Your Services
Amend the TodoService class. There are major changes to this class to adjust the way the code manages data. Instead of using an in-memory array to store the To Do items and tasks, you'll change it to connect directly to the database.

You start by injecting the Repository<TodoEntity> instance as follows:

constructor(
  @InjectRepository(TodoEntity)
  private readonly todoRepo:
    Repository<TodoEntity>,
) {}

The @nestjs/typeorm package defines the @InjectRepository() decorator. Its role is to retrieve a Repository instance for a specific entity from the Nest.js Dependency Injection system and make it available for the service.
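The effect of the @IsNotEmpty() and @MaxLength(500) rules on TodoCreateDto can be sketched as a plain function. This is a hypothetical stand-in for what class-validator does through decorators and reflection, not the library's actual implementation; the error messages are illustrative only.

```typescript
// Hypothetical stand-in for the class-validator rules used on
// TodoCreateDto: name must be non-empty, description <= 500 chars.
interface TodoCreateDtoShape {
  name: string;
  description?: string;
}

export function validateTodoCreateDto(
  dto: TodoCreateDtoShape,
): string[] {
  const errors: string[] = [];
  // @IsNotEmpty() on name
  if (!dto.name || dto.name.trim().length === 0) {
    errors.push('name should not be empty');
  }
  // @MaxLength(500) on the optional description
  if (dto.description !== undefined && dto.description.length > 500) {
    errors.push('description must be at most 500 characters');
  }
  return errors;
}

console.log(validateTodoCreateDto({ name: 'Groceries' })); // []
console.log(validateTodoCreateDto({ name: '' }).length);   // 1
```

When the real ValidationPipe finds errors like these, it rejects the request before the controller method ever runs — which is exactly why only the decorated create() and update() methods need @UsePipes().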



Earlier, when you imported TypeOrmModule.forFeature([TodoEntity, TaskEntity]) into the module, Nest.js registered a Repository service provider (token and factory method) for each and every entity in the Nest.js Dependency Injection system. The @nestjs/typeorm package wraps TypeORM into a native Nest.js module that integrates seamlessly with the Nest.js Dependency Injection system.

A best practice is to always inject Repositories inside your Services rather than working directly with Repositories inside your Controllers.

Next, you'll look at a few methods. The rest you can find in the source code accompanying this article at the GitHub repository and in the online version of this issue of CODE Magazine.

Listing 10 shows how to implement the getOneTodo() method. The method uses the findOne() function, available on the Repository instance, to query for a single TodoEntity based on the TodoEntity ID column. In addition, it returns all of the related TaskEntity lists on this object. If the entity isn't found in the database, the code throws an HttpException. Otherwise, it converts the TodoEntity object into a TodoDto object and returns the data to the calling controller.

Listing 10: getOneTodo() function

async getOneTodo(id: string): Promise<TodoDto> {
  const todo = await this.todoRepo.findOne({
    where: { id },
    relations: ['tasks', 'owner'],
  });

  if (!todo) {
    throw new HttpException(
      `Todo list doesn't exist`,
      HttpStatus.BAD_REQUEST,
    );
  }

  return toTodoDto(todo);
}

Let's take a look at the toTodoDto() utility function that maps a TodoEntity to a TodoDto object. You could use a more advanced library like automapper, but for this article, I've decided to keep it simple and create my own mapper function. Listing 11 shows the toTodoDto() function.

Listing 11: toTodoDto() function

export const toTodoDto = (data: TodoEntity): TodoDto => {
  const { id, name, description, tasks } = data;

  let todoDto: TodoDto = {
    id,
    name,
    description,
  };

  if (tasks) {
    todoDto = {
      ...todoDto,
      tasks: tasks.map(
        (task: TaskEntity) => toTaskDto(task)),
    };
  }

  return todoDto;
};

Listing 12 shows the code for the TodoService createTodo() function.

Listing 12: createTodo() function

async createTodo(todoDto: TodoCreateDto): Promise<TodoDto> {
  const { name, description } = todoDto;

  const todo: TodoEntity = await this.todoRepo.create({
    name,
    description,
  });

  await this.todoRepo.save(todo);
  return toPromise(toTodoDto(todo));
}

The Repository object exposes the create() function to create a new instance of an entity. Once you create a new instance of TodoEntity, you save the entity in the database by using another function exposed by the Repository object: the save() function.

Conclusion
In this article, you've seen how easy it is to connect a Nest.js application to a database using the TypeORM library. In the upcoming article, you'll be looking at adding and dealing with users and authentication modules.

Happy Nesting!

Bilal Haidar
52 Nest.js Step-by-Step: Part 2 codemag.com


ONLINE QUICK ID 1909091

Cross-Platform Mobile Development Using Flutter
For a number of years, mobile developers have had to grapple with maintaining multiple code bases of their apps—one for
each platform. And for a number of years, that meant developing simultaneously for iOS, Android, Windows Phone, and even
Blackberry. Fortunately, that didn’t last. Today, the mobile platform wars yielded two winners: iOS and Android.

Wei-Meng Lee
weimenglee@learn2develop.net
www.learn2develop.net
@weimenglee

Wei-Meng Lee is a technologist and founder of Developer Learning Solutions (http://www.learn2develop.net), a technology company specializing in hands-on training on the latest technologies. Wei-Meng has many years of training experience and his training courses place special emphasis on the learning-by-doing approach. His hands-on approach to learning programming makes understanding the subject much easier than reading books, tutorials, and documentation. His name regularly appears in online and print publications such as DevX.com, MobiForge.com, and CODE Magazine.

Even so, developers dread having to maintain dual code bases for their apps unless it's totally necessary. Companies are also trying to avoid maintaining multiple code bases; otherwise, they need to have separate teams of developers specializing in each platform.

In recent years, cross-platform development frameworks have emerged as life savers for developers, with Xamarin taking the lead with its suite of development frameworks for cross-platform mobile development. And more recently, Facebook's React Native has proven to be a hit with mobile developers, allowing them to create mobile apps using JavaScript, a language that's already familiar to a lot of full-stack developers.

Not wanting to be left out of the burgeoning mobile market, in late 2018, Google announced Flutter 1.0, its latest cross-platform framework for developing iOS and Android apps. In this article, I'll give you an introduction to Flutter. By the end of this article, you'll be on your way to developing some exciting mobile apps using Flutter!

Getting Started with Flutter
Flutter is Google's portable UI toolkit for building natively compiled mobile, Web, and desktop apps using Google's Dart programming language.

Flutter has the following major components:

• Flutter engine: Written in C++, provides low-level rendering support using Google's Skia graphics library
• Foundation Library: Written in Dart, provides a base layer of functionality for apps and APIs to communicate with the engine
• Widgets: Basic building blocks for UI

In the next couple of sections, I'll show you how to install Flutter and start writing your first Flutter application. Once you've gotten started with the basics, you'll create a news reader application that demonstrates how easy it is to write compelling mobile apps with Flutter.

Installing Flutter
To develop cross-platform iOS and Android mobile apps with Flutter, you need to use a Mac. For this article, I'm going to base my examples on the Mac. Before you get started, you need to ensure that you have the following components installed:

• Xcode
• Android Studio

To install Flutter on your Mac, head over to this page: https://flutter.dev/docs/get-started/install/macos. The instructions on this page are pretty clear and self-explanatory, and I won't repeat them here.

For the development environment, you can use Android Studio or Visual Studio Code. I prefer Visual Studio Code. To configure Visual Studio Code to support your Flutter development, check out this page: https://flutter.dev/docs/development/tools/vs-code.

Creating Your First Flutter Project
Once the SDK and tools are set up, you're ready to create your first Flutter application. The easiest way is to type the following command in Terminal:

$ flutter create hello_world

Note that Flutter project names must be in lower case and you can use the underscore character (_) if you need a separator for the project name (just don't use camel case). The above command creates a folder named hello_world containing a number of files forming your project.

To examine the content of the Flutter project created for you, open the hello_world project using Visual Studio Code.

You can open up your Flutter project by dragging the project folder into Visual Studio Code.

Figure 1 shows the content of the Flutter project. Of particular interest are the following files/folders:

• The main.dart file in the lib folder: This is the main file of your Flutter application.
• The ios folder: This is the shell iOS application that runs on your iOS device/simulator.
• The android folder: This is the shell Android application that runs on your Android device/emulator.
• The pubspec.yaml file: This file contains references to the various packages needed by your application.

To run the application, you need the following:

• iOS Simulator(s) and/or Android emulator(s)
• iOS device(s) and/or Android device(s)

The easiest way to test the application is to use the iOS Simulator and an Android emulator. For the Android emulator,
54 Cross-Platform Mobile Development Using Flutter codemag.com



Figure 1: Visual Studio Code showing the files in the Flutter project

open Android Studio and create an AVD. For the iOS Simulator, the simplest way to launch it is to use the following command in Terminal:

$ open -a Simulator

Once the iOS Simulator and Android emulator are launched, you can run the Flutter application using the following commands:

$ cd hello_world
$ flutter run -d all

The above command runs the application on all connected devices/simulators/emulators. If you want to know which devices/simulators/emulators are connected, use the following command:

$ flutter devices

You should see something like the following (I have bolded the device IDs):

2 connected devices:

Android SDK built for x86 • emulator-5554
  • android-x86 • Android 9 (API 28) (emulator)
iPhone Xʀ • 95080E0D-F31B-4938-9CE7-01830B07F7D0
  • ios
  • com.apple.CoreSimulator.SimRuntime.iOS-12-2
  (simulator)

To run the application on a particular device, use the following command:

$ flutter run -d <device_id>

The <device_id> is highlighted.

When the application has successfully loaded onto the simulator and emulator, you should see them, as shown in Figure 2.

Understanding How Flutter Works
To learn how Flutter works, it's good to look at the main.dart file in the hello_world project and see how the various components work. Frankly, it's not the easiest way to learn Flutter because the various statements in the file can be quite overwhelming for the beginning developer. That's why I'll start off with the bare minimum and build up the application from scratch.

Widgets
Unlike other cross-platform development frameworks (like Xamarin and React Native), Flutter doesn't use the platform's native widgets. For example, in React Native, the


<view> element is translated natively into the UIView element on iOS and the View element on Android. Instead, Flutter provides a set of widgets (including Material Design and Cupertino—iOS—widgets), managed and rendered directly by Flutter's framework and engine. Figure 3 shows how Flutter works. Widgets are rendered onto a Skia canvas and sent to the platform. The platform displays the canvas and sends events back to the app.

Figure 3: How Flutter works

In Flutter, UIs are represented as widgets. Widgets describe how the view should look, given its current configuration and state. When the state changes, the widget rebuilds its description and the framework compares it with the previous description to determine the minimal changes needed to update the UI.

Flutter doesn't rely on the device's OEM widgets. It renders every view's components using its own high-performance rendering engine.

Types of Widgets
In Flutter, there are two main types of widgets:

• Stateless widgets: Changing the properties of stateless widgets has no effect on the rendering of the widget.
• Stateful widgets: Changing the properties of stateful widgets triggers the life cycle hooks and updates the UI using the new state.

Before you look at how to create stateless and stateful widgets, let's erase the entire content of the main.dart file and replace it with the following statements:

import 'package:flutter/material.dart';

void main() => runApp(
  Center(
    child: Container(
      margin: const EdgeInsets.all(10.0),
      color: Color(0xFFFFBF00),
      width: 300.0,
      height: 100.0,
      child: Center(
        child: Text(
          'Hello, CODE Mag!',
          textDirection: TextDirection.ltr,
          style: TextStyle(
            color: Color(0xFF000000),
            fontSize: 32,
          )
        ),
      ),
    ),
  ),
);

Figure 2: The hello_world application running on the simulator and emulator

Hot-reload has no effect on the root widget; in general, when you perform a hot-reload, the main() function won't be re-executed and no changes will be observed.

The main() function is the main entry point for your application. The runApp() function has a widget argument; this argument will become the root widget for the whole app. In



this example, Container (which is a widget) is the root widget of the application. As the name implies, the Container widget is used to contain other widgets; in this case, it contains the Center widget, which, in turn, contains the Text widget and displays the string "Hello, CODE Mag!"

If you've run the application previously from Terminal, you don't need to stop the application in order for the application to be updated. Flutter supports two types of update:

• Hot reload (press "r" in Terminal). This option allows you to update the UI without restarting the application.
• Hot restart (press "R" in Terminal). This option allows you to restart the application.

Figure 4 shows what happens when you press "R" to hot-restart the application. For this example, hot-reload has no effect, as all of the UIs are defined in the root widget. You'll see hot-reload in action later on when I discuss stateless and stateful widgets.

Figure 4: Performing a hot restart in Terminal

Figure 5 shows the application running on the simulator and emulator.

Figure 5: The application running on the iOS Simulator and Android emulator

Using the MaterialApp and CupertinoApp Classes
The example in the previous section has a dark background and doesn't look like a traditional iOS or Android application. Flutter provides two main convenience widgets that wrap your widgets in the design styles for the iOS and Android platforms:

• MaterialApp: The MaterialApp class represents an application that uses Material Design. It implements the Material Design language for iOS, Android, and Web.
• CupertinoApp: The CupertinoApp class represents an application that uses Cupertino design. It implements the current iOS design language based on Apple's Human Interface Guidelines.

Let's now wrap the widget using the MaterialApp class:

import 'package:flutter/material.dart';

void main() => runApp(
  MaterialApp(
    title: 'Material App Demo',
    home: Scaffold(
      appBar: AppBar(
        title: Text('Material App Demo'),
      ),
      body:
        Center(
          child: Container(
            margin: const EdgeInsets.all(10.0),
            color: Color(0xFFFFBF00),
            width: 300.0,
            height: 100.0,
            child: Center(
              child: Text(
                'Hello, CODE Mag!',
                textDirection: TextDirection.ltr,
                style: TextStyle(
                  color: Color(0xFF000000),
                  fontSize: 32,
                )
              ),
            ),
          ),
        ),
    ),
  )
);



Hot restarting the app shows the application displayed in MaterialApp style (see Figure 6).

Figure 6: Applying the MaterialApp class to the application

In addition to the MaterialApp, you can also use the CupertinoApp class to make your application look like a native iOS application:

import 'package:flutter/cupertino.dart';

void main() => runApp(
  CupertinoApp(
    title: 'Cupertino App Demo',
    home: CupertinoPageScaffold(
      navigationBar: CupertinoNavigationBar(
        middle: const Text('Cupertino App Demo'),
      ),
      child:
        Center(
          child: Container(
            margin: const EdgeInsets.all(10.0),
            color: Color(0xFFFFBF00),
            width: 300.0,
            height: 100.0,
            child: Center(
              child: Text(
                'Hello, CODE Mag!',
                textDirection: TextDirection.ltr,
                style: TextStyle(
                  color: Color(0xFF000000),
                  fontSize: 32,
                )
              ),
            ),
          ),
        ),
    )
  ),
);

Figure 7 shows how the application looks when you use the CupertinoApp class.

Figure 7: Applying the CupertinoApp class to the application

Stateless Widgets
So far, you have a pretty good idea of how UI in Flutter is created using widgets. In the previous section, the UI was created all in the runApp() function. A much better way to build the UI is to "componentize" the widget into independent widgets so that they can be reused. So now let's try to reorganize the code so that the UI is written as a stateless widget.

To create a stateless widget:

• Name the new Widget class and extend it from StatelessWidget.
• Implement the build() method, with one argument of type BuildContext and a return type of Widget.

Here is the template for a stateless widget:

class MyCustomWidget extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Center(
      ...
    );
  }
}


Listing 1: Creating a stateless widget

import 'package:flutter/cupertino.dart';

void main() => runApp(
  CupertinoApp(
    title: 'Cupertino App Demo',
    home: CupertinoPageScaffold(
      navigationBar: CupertinoNavigationBar(
        middle: const Text('Cupertino App Demo'),
      ),
      child:
        Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            MyCustomWidget("CODE Mag"),
          ],
        ),
    )
  ),
);

class MyCustomWidget extends StatelessWidget {
  //---all properties in stateless widget must
  // declare with final or const---
  final String name;

  //---class constructor---
  MyCustomWidget(this.name);

  @override
  Widget build(BuildContext context) {
    return
      Center(
        child: Container(
          margin: const EdgeInsets.all(10.0),
          color: Color(0xFFFFBF00),
          width: 300.0,
          height: 100.0,
          child: Center(
            child: Text(
              'Hello, $name!',
              textDirection: TextDirection.ltr,
              style: TextStyle(
                color: Color(0xFF000000),
                fontSize: 32,
              )
            ),
          ),
        ),
      );
  }
}

Listing 1 shows the previous UI rewritten as a stateless widget.

Hot restart the application and you should see the same output as shown in Figure 7.

Now, add another instance of the MyCustomWidget to the main.dart file:

void main() => runApp(
  CupertinoApp(
    title: 'Cupertino App Demo',
    home: CupertinoPageScaffold(
      navigationBar: CupertinoNavigationBar(
        middle: const Text('Cupertino App Demo'),
      ),
      child:
        Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            MyCustomWidget("CODE Mag"),
            MyCustomWidget("world"),
          ],
        ),
    )
  ),
);

Hot restart the application and you should see that there are now two instances of MyCustomWidget (see Figure 8).

Figure 8: Displaying two instances of MyCustomWidget

Do you still remember the hot-reload that I mentioned earlier? Modify the color in the stateless widget as follows:

@override
Widget build(BuildContext context) {
  return
    Center(
      child: Container(
        margin: const EdgeInsets.all(10.0),
        color: Color(0xFF80D8FF),
        width: 300.0,
        height: 100.0,
        child: Center(
          child: Text(
codemag.com Cross-Platform Mobile Development Using Flutter 59


'Hello, $name!', )
textDirection: TextDirection.ltr, ),
style:TextStyle( ),
color:Color(0xFF000000), ),
fontSize:32, );
}

When you now hot reload the app (press “r” in Terminal),
you’ll see the colors of the MyCustomWidget change im-
mediately (see Figure 9).

Stateful Widgets
Stateless widgets are useful for displaying UI elements that don't change during runtime. However, if you need to dynamically change the UI during runtime, you need to create stateful widgets.

Stateful widgets don't exist by themselves: They require an extra class to store the state of the widget. To create a stateful widget:

• Name the new Widget class and extend it from StatefulWidget.
• Create another class that extends from the State class, of the type that extends from the StatefulWidget base class. This class will implement the build() method, with one argument of type BuildContext and return type of Widget. This class will maintain the state for the UI to be updated dynamically.
• Override the createState() function in the StatefulWidget subclass and return an instance of the State subclass (created in the previous step).

The following shows the template for creating a stateful widget:

class MyCustomStatefulWidget extends StatefulWidget {

  //---constructor with named
  // argument: country---
  MyCustomStatefulWidget(
    {Key key, this.country}) : super(key: key);

  //---used in _DisplayState---
  final String country;

  @override
  _DisplayState createState() => _DisplayState();
}

class _DisplayState extends State<MyCustomStatefulWidget> {

  @override
  Widget build(BuildContext context) {
    return Center(
      //---country defined in StatefulWidget
      // subclass---
      child: Text(widget.country),
    );
  }
}

Figure 9: Changing the color of the stateless widget and using hot-reload to update it immediately

Listing 2: Creating a stateful widget

class MyCustomStatefulWidget extends StatefulWidget {
  MyCustomStatefulWidget({Key key, this.country}) :
    super(key: key);
  final String country;

  @override
  _DisplayState createState() => _DisplayState();
}

class _DisplayState extends State<MyCustomStatefulWidget> {
  int counter = 0;

  @override
  Widget build(BuildContext context) {
    return Center(
      child: Container(
        margin: const EdgeInsets.all(10.0),
        color: Color(0xFFFFBF00),
        width: 300.0,
        height: 100.0,
        child: Center(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: <Widget>[
              Text(
                widget.country,
                textDirection: TextDirection.ltr,
                style: TextStyle(
                  color: Color(0xFF000000),
                  fontSize: 32,
                ),
              ),
              Center(
                child: GestureDetector(
                  onTap: () {
                    setState(() {
                      ++counter;
                    });
                  },
                  child: Container(
                    decoration: BoxDecoration(
                      shape: BoxShape.rectangle,
                      color: Color(0xFF17A2B8),
                    ),
                    child: Center(
                      child: Text(
                        '$counter',
                        style: TextStyle(fontSize: 25.0),
                      ),
                    ),
                  ),
                ),
              ),
            ],
          ),
        ),
      ),
    );
  }
}

Using the earlier example, let's now create a stateful widget by appending the code in bold (as shown in Listing 2) to main.dart.

To make use of the stateful widget, add it to the runApp() function, like this:

Figure 10: Adding the stateful widget to the app
import 'package:flutter/cupertino.dart';

void main() => runApp(

CupertinoApp(
title: 'Cupertino App Demo',
home: CupertinoPageScaffold(
navigationBar: CupertinoNavigationBar(
middle: const Text('Cupertino App Demo'),
),
child:
Column(
mainAxisAlignment:
MainAxisAlignment.center,
children: <Widget>[
MyCustomWidget("Code Mag"),
MyCustomWidget("world"),

MyCustomStatefulWidget(key:Key("1"),
country:"Singapore"),
MyCustomStatefulWidget(key:Key("2"),
country:"USA"),
],
),
)
  ),
);

Figure 11: The widget tree in the stateful widget

Performing a hot restart yields the UI, as shown in Figure 10. Clicking on the blue strip increments the counter.

Observe the following:

• The MyCustomStatefulWidget class has a property named country. This value is initialized through the named argument in the constructor: MyCustomStatefulWidget({Key key, this.country}).
• The country property is used in the _DisplayState class, and it can be referenced by prefixing it with the widget keyword.
• Our stateful widget tree contains the widgets, as shown in Figure 11.



• The value of counter is displayed within the Text widget. When the GestureDetector detects a tap on the blue strip on the widget, it calls the setState() function to change the value of counter.
• Modifying the value of counter using the setState() function causes the build() function to be called again, and those widgets that reference the counter variable are updated automatically.

Building the News Reader Project
By now, you should have a good understanding of how Flutter works. The best way to learn a new framework is to build a sample app and see how the various components fall in place, so let's now build a complete working application.

For this project, you'll create a news application that displays the news headline in a ListView, as shown in Figure 12.

When the user taps on a particular headline, the application navigates to another page and loads the details of the news in a WebView (see Figure 13).

For fetching the news headlines, you can use the following API: https://newsapi.org/v2/top-headlines?country=us&category=business&apiKey=<api_key>

You can apply for your own free News API key from https://newsapi.org.

Figure 12: Displaying the news feed in a ListView

Creating the Project
Let's first start by creating the project:

$ cd ~
$ flutter create news_reader

Adding the Package
For this project, you need to use the HTTP package so that you can connect to the News API. Add the following statement in bold to the pubspec.yaml file:

dependencies:
flutter:
sdk: flutter
http:

Once you save the changes to the pubspec.yaml file, Visual Studio automatically fetches the package and installs it on your local drive. Alternatively, you can use the following command to manually download the packages:

$ flutter packages get

Importing the Packages
In the main.dart file, add the following statements in bold:

import 'package:flutter/material.dart';

// for Future class
import 'dart:async';

Figure 13: Using a WebView to load the news article



Listing 3: Populating the ListView

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  Map _news = {"articles":[]};

  @override
  void initState() {
    super.initState();
    downloadHeadlines(); // download from News API
  }

  Future<http.Response> fetchNews(String url) {
    return http.get(url);
  }

  convertToJSON(http.Response response) {
    if (response.statusCode == 200) {
      setState(() => {
        _news = jsonDecode(response.body)
      });
    }
  }

  downloadHeadlines() {
    fetchNews('https://newsapi.org/v2/top-headlines?' +
      'country=us&category=' +
      'business&apiKey=<api_key>')
      .then( (response) => {
        convertToJSON(response)
      });
  }

  ListTile _buildItemsForListView(
    BuildContext context, int index) {
    return ListTile(
      title: _news['articles'][index]['urlToImage'] == null ?
        // default image
        Image.network('https://bit.ly/2WtOm6N') :
        // news image
        Image.network(_news['articles'][index]['urlToImage']),
      subtitle: _news['articles'][index]['title'] == null ?
        Text("Loading...") :
        Text(_news['articles'][index]['title'],
          style: TextStyle(fontSize: 15,
            fontWeight: FontWeight.bold)),
    );
  }

  void _incrementCounter() {
    setState(() {
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body:
        ListView.builder(
          itemCount: _news['articles'].length,
          itemBuilder: _buildItemsForListView,
        ),
      //---comment out the following statements---
      /*
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: Icon(Icons.add),
      ),
      */
    );
  }
}

// for http
import 'package:http/http.dart' as http;

// for JSON parsing
import 'dart:convert';

Flutter and Dart
Although Flutter uses the Dart language, it's actually quite an easy language to pick up. For readers who want a quick start to Dart, check out my Dart Cheat Sheet at https://weimenglee.blogspot.com/2019/04/dart-cheat-sheet.html.

Updating the Title of the App
Make the following modifications to the main.dart file:

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'News Headline',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(title: 'News Headline'),
    );
  }
}

Accessing the News API
The News API returns a JSON string containing a summary of the various news headlines. The first thing you need to do is to examine the structure of the JSON result returned and see which of the parts you need to retrieve for your application.

Access the News API using https://newsapi.org/v2/top-headlines?country=us&category=business&apiKey=<api_key>. Once the result is obtained, paste the result into a JSON formatter, such as http://jsonlint.com. Figure 14 shows the JSON result formatted.

In particular, you're interested in extracting the following:

• All of the articles referenced by the articles key
• For each article, extract the values of title, description, url, and urlToImage

Populating the ListView
Add the statements in bold to the main.dart file as shown in Listing 3.

A Future object represents the results of asynchronous operations

Here is what you've added to the main.dart file:

• You created a variable named _news and initialized it as a map object with one key, articles, and set it to an empty list. Later you'll connect it to the News API, retrieve the news article that you want, and assign the values to the _news variable.
• You overrode the initState() function so that when the page is loaded, it calls the downloadHeadlines() function to download the content from the News API.
• The fetchNews() function connects to the News API and returns a Future object of type http.Response.
• The convertToJSON() function converts the content downloaded from the News API and encodes it into a JSON object.
• The _buildItemsForListView() function returns a ListTile containing the UI for each row in the ListView. At this point, each row contains an image and a title for the news.
• You use the ListView.builder() function to build the ListView, passing it the number of rows to create, as well as the function (_buildItemsForListView) that populates each row of the ListView.

The ListTile class represents a row in the ListView. The title argument typically takes in a Text widget, but it can take any widget.
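For reference, the portion of the News API response that this code reads has roughly the following shape. The field names match those used in the article; the values shown here are placeholders, not actual API output, and the real response contains additional fields not shown:

```json
{
  "status": "ok",
  "totalResults": 1,
  "articles": [
    {
      "title": "Sample headline",
      "description": "Sample description of the article",
      "url": "https://example.com/article",
      "urlToImage": "https://example.com/image.jpg"
    }
  ]
}
```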

You can now test the application on the iOS Simulator and
Android emulator. Type the following command in Terminal:

$ cd news_reader
$ flutter run -d all

The applications should now look like Figure 15.
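As a refresher on the Future returned by fetchNews(), here is a minimal standalone Dart sketch showing the same then() pattern used by downloadHeadlines(). The URL is a placeholder, and it assumes the pre-1.0 http package used at the time this article was written, which accepts a string URL:

```dart
import 'package:http/http.dart' as http;

void main() {
  // http.get() returns Future<http.Response>; the callback
  // passed to then() runs once the response arrives.
  http.get('https://example.com')
      .then((response) => print(response.statusCode));
}
```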

Implementing Pull-to-Refresh
The next thing to do is to implement pull-to-refresh so that
you can update the news feed by pulling down the ListView
and then releasing it. Add the statements in bold to the
main.dart file as shown in Listing 4.

To ensure that the ListView supports pull-to-refresh, use the RefreshIndicator widget and set its child to the ListView.

Figure 14: Formatting the JSON string so that it's easier to understand its structure

Listing 4: Adding pull-to-refresh support

void _incrementCounter() {
  setState(() {
    _counter++;
  });
}

Future<Null> _handleRefresh() async {
  downloadHeadlines();
  return null;
}

@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      title: Text(widget.title),
    ),
    body:
      RefreshIndicator(
        child:
          ListView.builder(
            itemCount: _news['articles'].length,
            itemBuilder: _buildItemsForListView,
          ),
        onRefresh: _handleRefresh,
      ),
    //---comment out the following statements---
    /*
    floatingActionButton: FloatingActionButton(
      onPressed: _incrementCounter,
      tooltip: 'Increment',
      child: Icon(Icons.add),
    ),
    */
  );
}

Listing 5: Customizing the content of the ListTile

ListTile _buildItemsForListView(
  BuildContext context, int index) {
  return ListTile(
    leading: _news['articles'][index]['urlToImage'] == null ?
      CircleAvatar(backgroundImage:
        NetworkImage('https://bit.ly/31l2Q7Q'))
      :
      CircleAvatar(backgroundImage:
        NetworkImage(_news['articles'][index]['urlToImage'])),
    title:
      Column(children: <Widget>[
        _news['articles'][index]['title'] == null ?
          Text("", style: TextStyle(fontSize: 20,
            fontWeight: FontWeight.bold))
          :
          Text(_news['articles'][index]['title'], style:
            TextStyle(fontSize: 20,
              fontWeight: FontWeight.bold)),
        _news['articles'][index]['description'] == null ?
          Text("", style: TextStyle(fontSize: 15,
            fontStyle: FontStyle.italic))
          :
          Text(_news['articles'][index]['description'], style:
            TextStyle(fontSize: 15,
              fontStyle: FontStyle.italic)),
        Divider(height: 20.0,)
      ]),
    trailing: Icon(Icons.keyboard_arrow_right),
  );
}


builder() function. Then, set its onRefresh argument to the _handleRefresh() function, which calls the downloadHeadlines() function again to download the news content.

Customizing the Content of the ListTile
Instead of displaying an image and the news title on each row, it would be better to display the news title and its description, followed by a smaller image.

Add the statements in bold to the main.dart file that are shown in Listing 5.

Here, you use the leading argument of the ListTile class to display an icon using the CircleAvatar class. The title argument is then set to a Column object, which in turn contains the title and description of the article. You also added a Divider object, which displays a faint line between the rows.

Perform a hot-reload of the app. You should now see the updated ListView, as shown in Figure 16.

Converting to a Navigational Application
Now that the ListView displays the list of articles, it would be nice if the user could tap on an article to read more about it. For this, you're going to create a details page that will be used to display the content of the article.

Add the following statements in bold to the main.dart file:

import 'package:flutter/material.dart';

// for Future class
import 'dart:async';

// for http
import 'package:http/http.dart' as http;

// for JSON parsing
import 'dart:convert';

// to store the data to pass to another widget
class NewsContent {
  final String url;
  NewsContent(this.url);
}

Figure 15: The first cut of the news reader application

void main() => runApp(MyApp());


...

The NewsContent class is used to store the URL of the article so that it can be passed to the details page. Append the following block of code to the end of the main.dart file:

class DetailsPage extends StatelessWidget {


// to hold the data passed into this page
final NewsContent data;

// create a constructor for the page with


// the data parameter
DetailsPage({Key key, @required this.data}) :
super(key:key);

@override
Widget build(BuildContext context) {
String url = data.url;
    return Scaffold(

Figure 16: The second cut of the application



Listing 6: Creating the Details Page

ListTile _buildItemsForListView(
  BuildContext context, int index) {
  return ListTile(
    leading: _news['articles'][index]['urlToImage'] == null ?
      CircleAvatar(backgroundImage:
        NetworkImage('https://bit.ly/31l2Q7Q'))
      :
      CircleAvatar(backgroundImage:
        NetworkImage(_news['articles'][index]['urlToImage'])),
    title:
      Column(children: <Widget>[
        _news['articles'][index]['title'] == null ?
          Text("", style: TextStyle(fontSize: 20,
            fontWeight: FontWeight.bold))
          :
          Text(_news['articles'][index]['title'], style:
            TextStyle(fontSize: 20,
              fontWeight: FontWeight.bold)),
        _news['articles'][index]['description'] == null ?
          Text("", style: TextStyle(fontSize: 15,
            fontStyle: FontStyle.italic))
          :
          Text(_news['articles'][index]['description'], style:
            TextStyle(fontSize: 15,
              fontStyle: FontStyle.italic)),
        Divider(height: 20.0,)
      ]),
    trailing: Icon(Icons.keyboard_arrow_right),
    onTap: () {
      Navigator.push(
        context,
        MaterialPageRoute(builder: (context) => DetailsPage(
          data: NewsContent(_news['articles'][index]['url']),)),
      );
    },
  );
}

      appBar: AppBar(
        title: Text("Details Page"),
      ),
      body: Center(
        child: Text('$url'),
      ),
    );
  }
}

The DetailsPage takes in the data passed into it (which is the URL of the article) and displays it in the center of the page.

Add the statements in bold to the main.dart file that are shown in Listing 6.

The onTap argument specifies that when the user taps on a row in the ListView, it navigates (using the Navigator.push() function) to the next page (DetailsPage) and passes it the data (NewsContent).

Redeploy the application. Select a particular news headline and you should see the details page as shown in Figure 17.

Adding a WebView
Displaying the URL of the article in the details page is not
very useful to the reader. What you really want to do is to
use a WebView to display the content of the article.

Add the following bolded statement to the pubspec.yaml file:

dependencies:
flutter:
sdk: flutter
http:
webview_flutter:

The above statement adds the webview_flutter package to the project.

To use WebView on iOS, you need to add the following bolded statements to the Info.plist file located in the ios/Runner folder:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  ...
  <key>UIViewControllerBasedStatusBarAppearance</key>
  <false/>
  <key>io.flutter.embedded_views_preview</key>
  <true/>
</dict>
</plist>

Figure 17: Displaying the news in another page

Add the following bolded statements to the main.dart file:

import 'package:flutter/material.dart';

// for Future class


import 'dart:async';

// for http
import 'package:http/http.dart' as http;

// for JSON parsing


import 'dart:convert';

import
'package:webview_flutter/webview_flutter.dart';

// to store the data to pass to another widget


class NewsContent {
final String url;
NewsContent(this.url);
}

...

class DetailsPage extends StatelessWidget {


// to hold the data passed into this page
final NewsContent data;
Figure 18: Displaying the news in the WebView
// create a constructor for the page with
// the data parameter
DetailsPage({Key key, @required this.data}) :
super(key:key);

@override
Widget build(BuildContext context) {
String url = data.url;
return Scaffold(
appBar: AppBar(
title: Text("Details Page"),
),
body: Center(
child: WebView(initialUrl: url,
javascriptMode:
JavascriptMode.unrestricted,)
),
);
}
}

Redeploy the application. Select a particular news headline and you should see the news loaded in the WebView (see Figure 18).

Displaying a Spinner
Now that the app is almost complete, let's add a final touch to it. When the article is loading in the WebView, let's display a spinner so that the user can be visually aware that the page is still loading. Once the entire article is loaded, the spinner disappears. For this purpose, you can use the flutter_spinkit package, which is a collection of loading indicators written for Flutter. In particular, let's use the SpinKitFadingCircle widget.

Figure 19: The spinner showing that the page is currently loading



Listing 7: Rewriting DetailsPage as a stateful widget

import 'package:flutter/material.dart';

// for Future class
import 'dart:async';

// for http
import 'package:http/http.dart' as http;

// for JSON parsing
import 'dart:convert';

import 'package:webview_flutter/webview_flutter.dart';
import 'package:flutter_spinkit/flutter_spinkit.dart';

...

class DetailsPage extends StatefulWidget {
  final NewsContent data;
  DetailsPage({Key key, @required this.data}) : super(key:key);

  @override
  _DetailsPageState createState() => _DetailsPageState();
}

class _DetailsPageState extends State<DetailsPage> {
  bool displaySpinner = true;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text("Details Page"),
      ),
      body:
        Stack(
          children: <Widget>[
            WebView(
              //---access data in the statefulwidget---
              initialUrl: widget.data.url,
              javascriptMode: JavascriptMode.unrestricted,

              //---when the loading of page is done---
              onPageFinished: (url) {
                setState(() {
                  displaySpinner = false;
                });
              },
            ),
            Container(
              child: displaySpinner ?
                SpinKitFadingCircle(
                  itemBuilder: (_, int index) {
                    return DecoratedBox(
                      decoration: BoxDecoration(
                        color: index.isEven ?
                          Colors.red : Colors.green,
                      ),
                    );
                  },
                ):
                Container()
            )
          ]
        )
    );
  }
}

Add the following statement in bold to the pubspec.yaml file:

dependencies:
  flutter:
    sdk: flutter
  http:
  webview_flutter:
  flutter_spinkit:

To display the spinner, use the Stack widget to overlay the WebView widget with the Container widget, which in turn contains the SpinKitFadingCircle widget when the WebView is loading, and an empty Container widget when the loading is complete, like this:

Stack(
  children: <Widget>[
    WebView(
      //---access data in the statefulwidget---
      initialUrl: widget.data.url,
      javascriptMode: JavascriptMode.unrestricted,

      //---when the loading of page is done---
      onPageFinished: (url) {
        setState(() {
          displaySpinner = false;
        });
      },
    ),
    Container(
      child: displaySpinner ?
        SpinKitFadingCircle(
          itemBuilder: (_, int index) {
            return DecoratedBox(
              decoration: BoxDecoration(
                color: index.isEven ?
                  Colors.red : Colors.green,
              ),
            );
          },
        ):
        Container()
    )
  ]
)

Because of the need to dynamically hide the SpinKitFadingCircle widget when the WebView has finished loading, it's necessary to rewrite the DetailsPage class as a StatefulWidget.

Add the bolded statements to the main.dart file as shown in Listing 7.

Redeploy the application. Select a particular news headline and you should see the SpinKitFadingCircle widget displaying (see Figure 19).

Summary
Learning a new framework is always challenging. But I do hope that this article has made it easier for you to get started with Flutter. Let me know what you are using now (Xcode, Android Studio, Xamarin, or React Native) and if you plan to switch over to Flutter. You can reach me on Twitter @weimenglee or email me at weimenglee@learn2develop.net.

 Wei-Meng Lee



ONLINE QUICK ID 1909101

Add File Storage to Azure App Services: The Work-Around

Azure App Service Plans in which Azure App Services run include 1, 10, 50, or 250 GB of disk space, depending on the pricing tier you choose, but if you need more than that, or if you don't want to pay the premium price just for extra disk space, you're out of luck. I recently worked on a project upgrading a website that required hundreds of GB of files be stored in a virtual directory

in the file system. When retrieved, some of the files referred to other files via a relative path. In addition, the files had to be protected by authentication and authorization. We had to maintain the hierarchy of the file system and integrate security so they could only be retrieved through the Web application, so we couldn't just store the files in BLOB storage.

Mike Yeager
www.internet.com
Mike is the CEO of EPS's Houston office and a skilled .NET developer. Mike excels at evaluating business requirements and turning them into results from development teams. He's been the Project Lead on many projects at EPS and promotes the use of modern best practices, such as the Agile development paradigm, use of design patterns, and test-driven and test-first development. Before coming to EPS, Mike was a business owner developing a high-profile software business in the leisure industry. He grew the business from two employees to over 30 before selling the company and looking for new challenges. Implementation experience includes .NET, SQL Server, Windows Azure, Microsoft Surface, and Visual FoxPro.

You can't add drive space or additional drives to Azure Web Services.

We soon found out that the obvious solution, to create a large file share in Azure and attach it to the App Service Plan, isn't possible. You can't add drive space or additional drives to Azure App Services. We spent many hours searching for a solution and found that this is, in fact, a highly requested feature in Azure and that quite a lot of us would like to see a solution. At one point, we came across a new Azure feature, now in Preview, to do just what we needed, but soon found out that it's only going to be available in Linux-based App Services. So close! Undaunted, we came up with a work-around.

The Work-Around
Our application is written in ASP.NET MVC hosted on IIS in Azure App Services, but the principles can be applied to many Web platforms. To re-state the problem, the browser sometimes requests files from the server via a path to a virtual directory. Many of the files returned from the server contain relative paths to additional files and the files can't be modified, so the relative paths must continue to work. The amount of disk space available to the App Services is far less than the amount we require. The goal is to make the server respond to the unmodified URLs as though it's retrieving files from a virtual directory (a very large one).

The first hurdle we faced was preventing the Web server from attempting to find and retrieve the files because they won't be on the local drive where the website resides. The fundamental purpose of Web servers, after all, is to retrieve requested HTML and other files and return them to the browser. What I needed to happen instead was for IIS to allow me to handle the request. As far as IIS (the underlying technology used to host App Services in Azure) goes, when it encounters file extensions at the end of a URL, it assumes that you want to retrieve a file (through a process called request filtering) and goes to the file system looking for it. This happens very early in the processing pipeline, long before the request gets to ASP.NET, let alone any code under my control. Luckily, IIS has a way to alter this behavior for a specific path. Because we had to modify how IIS works, the change had to be made in the web.config file, as shown in Figure 1.

The configuration change shows how to override IIS's default request filtering. In our case, all of the files in question resided in a virtual directory named Media. With this configuration change, any URL path that consists of the domain, followed by /Media/, followed by anything, using any of the HTTP verbs listed (we only need the GET verb) will be passed to the TransferRequestHandler, which is the default handler for ASP.NET MVC. That means that when the browser asks for anything in this path, instead of looking for the file on disk and returning it, the server turns the request over to ASP.NET MVC, which looks at the route to determine what to do next.

In order to alleviate the need for an action parameter in the URLs for retrieving files and to allow for any number of subfolders, you need to create a special route. For example, a typical request for a file might look like this:

http://localhost:60234/media/somefolder/somefile.html

To accomplish this, add the following route before the default route in the RouteConfig.cs file, as shown in Figure 2.

The expression {*filePath} tells ASP.NET MVC to handle any number of additional parameters as a single parameter named filePath.
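Because Figure 2 is reproduced as an image, here is a hedged sketch of what such a route registration typically looks like. The route name, action name, and default values are assumptions for illustration; the author's actual code is in Figure 2:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // Special route: {*filePath} is a catch-all segment that
        // captures any number of path segments (including slashes)
        // into a single "filePath" parameter.
        routes.MapRoute(
            name: "Media",
            url: "Media/{*filePath}",
            defaults: new { controller = "Media", action = "GetFile" }
        );

        // The default MVC route must come after the special route,
        // or it would match /Media/... requests first.
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index",
                            id = UrlParameter.Optional }
        );
    }
}
```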

Next, create a MediaController to handle the requests going to the /Media/ path, as shown in Figure 3.
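As with the route, Figure 3 is an image, so the following is only a sketch of the general shape of such a controller. The action name GetFile matches the hypothetical route sketch above, and the storage lookup is a placeholder; the author's actual implementation is in Figure 3:

```csharp
using System.Web;
using System.Web.Mvc;

public class MediaController : Controller
{
    // Handles GET /Media/{anything}; filePath holds the full
    // remainder of the URL, e.g. "somefolder/somefile.html".
    [Authorize] // only allow calls from logged-in users
    public ActionResult GetFile(string filePath)
    {
        // Fetch the bytes from wherever they actually live
        // (the article uses a private Azure BLOB container).
        byte[] contents = LoadFileContents(filePath);
        return File(contents, MimeMapping.GetMimeMapping(filePath));
    }

    private byte[] LoadFileContents(string filePath)
    {
        // Placeholder: a real implementation would log into
        // BLOB storage and download the requested file.
        throw new System.NotImplementedException();
    }
}
```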

Now you can test the configuration by inserting a breakpoint in the controller method, running the Web app, and navigating to any file under the Media folder. For example:

http://localhost:60234/media/somefolder/somefile.html

Figure 1: The web.config with custom handler for /Media/* path
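Since Figure 1 appears only as an image, a handler mapping of the kind it describes generally looks something like this sketch; the handler name and preCondition values here are typical defaults and may differ from the author's file:

```xml
<!-- Sketch of a web.config handler mapping that routes all GET
     requests under /Media/ to ASP.NET MVC's TransferRequestHandler
     instead of IIS's static file handling. -->
<configuration>
  <system.webServer>
    <handlers>
      <add name="MediaTransferHandler"
           path="Media/*"
           verb="GET"
           type="System.Web.Handlers.TransferRequestHandler"
           preCondition="integratedMode,runtimeVersionv4.0" />
    </handlers>
  </system.webServer>
</configuration>
```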

70 Add File Storage to Azure App Services: The Work-Around codemag.com




a minute to set up in the Azure portal. So for most scenarios, that's an excellent choice, but not the only choice.

The next bit is what makes this solution really interesting. In our case, the retrieved file often contained relative paths to additional files it expected to be in the same folder structure. For example, the sample HTML page might reference a JavaScript file named MyScript.js that it expects to find in a /JavaScript/ folder under SomeFolder. When the sample HTML page is loaded, the browser will try to load the following URL (or you can load the file directly by typing in this address):

http://localhost:60234/media/somefolder/
javascript/myscript.js

This request, because it's also in the Media path, will be passed to the Media controller. The breakpoint we set earlier in the controller method will show that this time, the filePath variable will have the value SomeFolder/JavaScript/MyScript.js. Now we can retrieve that file and return it. In addition, because the browser is handling relative paths, the special folder .. is handled properly to refer to the parent folder. For example, if MyScript.js references the file ../../this is a sample file.pdf, the resulting URL requested by the browser will be "http://localhost:60234/media/this is a sample file.pdf" as expected.

Figure 2: RouteConfig.cs with custom MVC route for /Media/* path

Another nice feature of this approach is that because the requests are coming through an MVC Controller, you can use the standard [Authorize] attributes on the method to only allow calls from logged in users. You can also restrict access to files based on the user's role or even roll your own security using the SecurityPrincipal information provided by ASP.NET. In our case, this allowed us to secure the files and prevent users who aren't logged in from accessing them.

Figure 3: MediaController.cs with custom handling of file requests Not All Good News
As with every work around, there are some down sides.
Processing files through the ASP.NET MVC pipeline will cost
When you hit the breakpoint using this URL, you’ll find more in resources than simply allowing IIS to retrieve and
that the filePath parameter holds the value somefolder/ stream the files back to the caller. This means that you’ll
somefile.html. You can now use whatever code you like to need more memory and processing power than you would
come up with the contents of this file and return it to the when using a virtual directory. On balance, that’s not a bad
browser. You could even create the file right then and there tradeoff for the flexibility and control you’ll gain, but it’s
if you like, as in the example above. In our case, we chose something to be aware of, especially on a very high-volume
to use a private Azure BLOB storage container to hold the site.
actual files. The app logs into Azure BLOB storage, retrieves
the bytes, and passes them back to the browser. For all the Summary
browser knows, there’s a Media folder on the server with In this article, you saw how to override the default request
a subfolder named SomeFolder that contains a file named filtering that IIS uses for a specified URL path, giving you
somefile.html. control over how requests to that path are handled. You
saw how to configure a route in MVC to allow paths to a
It’s important to note that nothing has changed as far as depth of an arbitrary number of folders to be processed to
the browser is concerned. The link works exactly as it did in mimic a hierarchical file structure. Finally, you saw how to
the old system when there was a massive virtual directory to write a ASP.NET MVC controller that allows you to come up
serve files from. As showing the code to retrieve files from with the file to be returned in any way you can imagine and
Azure BLOB storage is pretty well covered in other places and return it to the browser without the browser knowing that
because setting up a BLOB storage account in order to run the file wasn’t being served up by IIS from disk. Perhaps
the code would complicate the sample, in this article, I’m us- some day, Microsoft will allow us to control the amount
ing files added to the solution as embedded resources as the of disk space we want our Azure App Services to have, but
source of the files I’m returning. Where you get the files is until then, there’s a workaround that’s not difficult to
really up to you. I will say that Azure BLOB storage allows file implement.
names with forward slashes in them that can naturally mim-
ic a hierarchical file system with folders, it recognizes and  Mike Yeager
stores mime types, and it’s quite inexpensive and only takes 

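Only the captions of Figures 2 and 3 survive in this reprint, so here is a minimal sketch, reconstructed from the description above, of what the catch-all route and the Media controller might look like. The method name GetFile and the LoadBytesFromSomewhere helper are assumptions for illustration, not the author's actual listings, and the web.config request-filtering change the article refers to isn't shown.

```csharp
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

// RouteConfig.cs -- the catch-all {*filePath} segment lets one route match
// arbitrarily deep paths like /media/somefolder/javascript/myscript.js.
public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Custom route: anything under /Media/ goes to MediaController,
        // with the remainder of the URL captured in filePath.
        routes.MapRoute(
            name: "Media",
            url: "Media/{*filePath}",
            defaults: new { controller = "Media", action = "GetFile" });

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index",
                            id = UrlParameter.Optional });
    }
}

// MediaController.cs -- [Authorize] keeps anonymous users out; where the
// bytes come from (BLOB storage, embedded resources, generated on the fly)
// is entirely up to you.
[Authorize]
public class MediaController : Controller
{
    public ActionResult GetFile(string filePath)
    {
        // filePath arrives as, e.g., "somefolder/somefile.html".
        byte[] bytes = LoadBytesFromSomewhere(filePath); // hypothetical helper
        if (bytes == null)
        {
            return HttpNotFound();
        }

        // Return the bytes with a MIME type derived from the file name.
        return File(bytes, MimeMapping.GetMimeMapping(filePath));
    }

    private byte[] LoadBytesFromSomewhere(string filePath)
    {
        // Placeholder: fetch from Azure BLOB storage, an embedded
        // resource, a database, etc.
        return null;
    }
}
```

Because the route funnels every /Media/* request through this one action, role checks ([Authorize(Roles = "...")]) or custom SecurityPrincipal logic can be applied in a single place for all served files.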
72 Add File Storage to Azure App Services: The Work-Around codemag.com


CODE COMPILERS

MANAGED CODER

On Time

For an industry that prides itself on its analytical ability and abstract mental processing, we often don't do a great job applying that mental skill to the most important element of the programmer's tool chest—that is, ourselves. I have a friend who continuously mocks me for any response that contains the word "busy." For the past year or so, she will text me, asking some question or remarking on some work idiocy, and it may take me more than a day to reply. Almost without fail, I apologize for the late reply, citing some variation on the phrase "I had stuff to do that kept me too busy to respond." Time, it seems, gets away from me on quite a regular basis.

Peter Drucker would have a field day with that paragraph.

Drucker on Time
"Effective executives know that time is the limiting factor. The output limits of any process are set by the scarcest resource. In the process we call 'accomplishment,' this is time." (The Effective Executive, p26)

It's not like I don't want to reply to my friend—she's quite the interesting person to talk to, and we have some really good discussions about life, philosophy, technology, and a few other things. It's just that there's always something else to do: prepare for a conference talk, meetings with colleagues and/or future colleagues, write an article for CODE Magazine, dinner, shower, sleep…. And I know I'm certainly not the only one who wrestles with this problem—how many times have you heard phrases like "If I can find the time" or "If there's enough time" or "Time got away from me" or any of the hundreds of permutations on that theme? It's almost like time is this elusive, mystical unicorn that only the most virtuous of us can find, capture, and enjoy.

"Time is also a unique resource. … The supply of time is totally inelastic. No matter how high the demand, the supply will not go up. There is no price for it and no marginal utility curve for it. Moreover, time is totally perishable and cannot be stored. Yesterday's time is gone forever and will never come back. Time is, therefore, always in exceedingly short supply." (p27)

Each of us receives exactly the same amount of time per day: 24 hours, 60 minutes per hour, and 60 seconds per minute. There are numerous memes and motivational posters that will helpfully calculate what that all multiplies into, but the number isn't the point; the point is that time, of all things, is the most egalitarian resource any of us will ever experience. No matter who you are, no matter where you were born, the color of your skin, or the balance in your account, you get exactly the same amount of time per day as the rest of us. And yet, for so many of us, that never seems to be enough. Where does that time go?

"Everything requires time. It is the one truly universal condition. All work takes place in time and uses up time. Yet most people take for granted this unique, irreplaceable, and necessary resource. Nothing else, perhaps, distinguishes effective executives as much as their tender loving care of time." (p27)

A few years ago, when money was a little tight at the house, my wife and I decided that we were going to shut off regular TV service. This was actually before the heyday of the streaming-media platforms that the world enjoys now, so we were making a pretty conscious decision to not have access to a lot of the normal programming that cable subscribers enjoy. We still had the TV itself, of course—we hooked up various console gaming devices to it so that the four of us in the family could spend time collectively tuned in to a video or DVD or game. (Mario Party is by far one of the best multi-player non-intense games ever made, and that's the hill I'll die on if I must.)

What we found is that turning off the TV opened up a whole range of possibilities that we hadn't really spent much time exploring, both collectively and individually. We dove much deeper into board games for our time together at the house. My Kindle exploded with a ton of books, fiction and non-, that I now have with me. Even as I write this, there's no TV on anywhere in the house—everybody is engaged in some other pursuit, even video-gaming, that requires more attention and interaction than just drooling in front of the TV.

Turning off the TV told us that a lot of our time was going to watching TV. Where do you spend your time?

Spending time
To be clear, you don't have to be an effective executive to understand time and its impact on your day. Drucker wrote, "… [M]ost of the tasks of the executive require, for minimum effectiveness, a fairly large quantum of time. To spend in one stretch less than this minimum is sheer waste. One accomplishes nothing and has to begin all over again. … To write a report may, for instance, require six or eight hours, at least for the first draft. It is pointless to give seven hours to the task by spending fifteen minutes twice a day for three weeks. All one has at the end is blank paper with some doodles on it." (p30) Drucker intuitively understood the concept of flow, and software developers almost universally intuitively understand that as well. "The same goes for an experiment. One simply has to have five to twelve hours in a single stretch to set up the apparatus and to do at least one completed run. Or one has to start all over again after an interruption." (p30) Substitute "experiment" and "apparatus" with "feature" and "environment," and that sentence could easily appear in any developer's handbook or blog post.

Drucker doesn't just end there, though: He also knew that people require time, too. I can't have a firm connection with my colleagues or co-workers or direct reports or managers if I can't spend a minimum quantum of time with them. "The manager who thinks that he can discuss the plans, direction, and performance of one of his subordinates in fifteen minutes is just deceiving himself. If one wants to get to the point of having an impact, one needs probably at least an hour and usually much more. And if one has to establish a human relationship, one needs infinitely more time." (p31)

If your job is to manage people—and that is the job of managers, after all, foremost among all other tasks—then there needs to be a steady, deliberate, and consistent investment of time on your part. "People-decisions are time-consuming, for the simple reason that the Lord did not create people as 'resources' for organization. They do not come in the proper size and shape for the tasks that have to be done… and they cannot be machined down or recast for these tasks. People are always 'almost fits' at best." (p35) No matter who they are or what they know or how long they've been with the organization, they will always have weaknesses that need shoring up and/or a lack of knowledge that needs fixing and/or any of a dozen other situations that require your time and attention. Some people will be free about admitting what they don't know; some will see the admission of ignorance as a critical weakness; and some simply won't realize that there are things they don't know. It's on you to make sure that all three categories of people are supported.

Summary
So, what now? "Time is precious, yeah, we get it. What do we do, then?"

For starters, admit that you probably don't know where the time goes in your day. Start a journal or a time log. Record yourself or get an activity-tracking app for your laptop or mobile device that keeps a log of where you spend your day. I'll bet you a nice bottle of wine that however you think you spend your time, there's going to be something in that report that surprises you.

Then, armed with this knowledge, start making more specific decisions. If you choose to play a video game because you need to relax, then do so with finite limits put into place. (I allow myself one co-op match on Starcraft II, for example, when I need to let my brain work on a problem subconsciously.) If you want to watch TV, go for it! Nothing wrong with that. Just make sure you know when you're going to stop. And so on. But be aware of how you can also shift resources from one area into another—in my house, we've had maids come clean every week or so, because when I compare the cost of having them clean the house against the time I (and my family) would have to spend doing it (and not as well), it's a total no-brainer. Ditto for the gardener. This is one area where I can shift capital to spend time on other things.

Most of all, be very concerned with how your actions waste other people's time. When setting up a meeting, does every single individual need to be there? Are they there because they will be contributors, or because you just want them to know the meeting is about to happen? Could they get an hour back if you just sent them a summary of the meeting afterwards? I routinely ignore company all-hands meetings: I can use that hour or two to get other things done, and anything super-urgent that I need to know will inevitably filter back to me either directly or indirectly.

Time is a precious resource. Treat it with the same care and diligence as you do your finances, if not more so, and you will suddenly find that you are getting so much more out of it.

Ted Neward

Sep/Oct 2019, Volume 20 Issue 5

Group Publisher: Markus Egger
Associate Publisher: Rick Strahl
Editor-in-Chief: Rod Paddock
Managing Editor: Ellen Whitney
Content Editor: Melanie Spiller
Editorial Contributors: Otto Dobretsberger, Jim Duffy, Jeff Etter, Mike Yeager
Writers in This Issue: Bilal Haidar, Wei-Meng Lee, Sahil Malik, Peter Mbanugo, Ted Neward, John V. Petersen, Paul D. Sheriff, Stefano Tempesta, Shawn Wildermuth, Mike Yeager
Technical Reviewers: Markus Egger, Rod Paddock
Production: Franz Wimmer, King Laurin GmbH, 39057 St. Michael/Eppan, Italy
Printing: Fry Communications, Inc., 800 West Church Rd., Mechanicsburg, PA 17055
Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, tammy@codemag.com
Circulation & Distribution: General Circulation: EPS Software Corp.; The NEWS Group (TNG); Newsstand: Ingram Periodicals, Inc.; Media Solutions
Subscriptions: Subscription Manager: Colleen Cade, ccade@codemag.com

US subscriptions are US $29.99 for one year. Subscriptions outside the US are US $49.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. The bill-me option is available only for US subscriptions. Back issues are available. For subscription information, e-mail subscriptions@codemag.com or subscribe online at www.codemag.com.

CODE Developer Magazine
6605 Cypresswood Drive, Ste 300, Spring, Texas 77379
Phone: 832-717-4445
Fax: 832-717-4460
