On Remote Work

Back in February, Chris Herd (@chris_herd), CEO and founder of FirstbaseHQ.com, wrote a tweet on remote work.

It turned out to be a fairly long thread sharing the findings from conversations with 2,000+ companies. I have summarized it below, as I think there are some very interesting points.

Most of us are used to working with distributed teams spanning multiple time zones: 4-5 hours from India to Europe and up to 9 hours from Europe across the US. We learn to work asynchronously and not to ping someone in Teams just saying “Hi”, but to actually state the purpose and the question, so the recipient can reply in their own good time; with an amount of “small talk” added (is it “small chat” if it happens in Teams?) depending on your cultural background 😊

Before you can talk Digital Transformation, you have to talk People Transformation. The points below confirm that.

The findings:

  • HQs are finished: companies will cut their commercial office space by 50-70%
    They will allow every worker to work from home 2-4 days a week, and come into the office 1-2 days a week
  • Fully distributed: ~30% of the companies we talk to are getting rid of the office entirely and going remote-first
    Companies doing this have seen their workers decentralize rapidly, leaving expensive cities to be closer to family
  • Access talent: The first reason they are going remote-first is simple – it lets them hire more talented people
    Rather than hiring the best person in a 30-mile radius of the office, they can hire the best person in the world for every role
  • Cut costs: The second reason they are going remote-first is because it lets them be far more cost-efficient
    Rather than spending $20000 / worker / year on office space they can provide the best remote setup on the planet for $2000 / worker / year
  • Remote burnout: The productivity inside the companies we’ve spoken to has gone through the roof
    Their biggest concern is that workers burnout because they are working too hard
    They are actively exploring ways to combat this
  • Remote onsites: 60%+ of companies we talk to are already thinking about ways to use time together physically to improve culture
    The most popular option we hear is flying the team into remote locations for about a week. Portugal, Spain and Puerto Rico seem to be the most popular
  • Personal choice: the smartest people I know personally are all planning to work remotely this decade
    The most exciting companies I know personally all plan to hire remotely this decade
    ~90% of the workforces we’ve spoken to never want to be in an office again full-time
  • Async by default: is the thing that organizations are struggling with most
    The majority of companies have replicated the office remotely and it is causing strains that are beginning to show
  • Personal injury: These are exploding. Companies haven’t moved quickly enough to prevent them, and back, neck and repetitive strain injuries are becoming a huge problem
    Expect companies to remedy this quickly by providing better, ergonomic equipment to workers
  • Universal problems: doesn’t matter the size of the organization, every company is dealing with the same thing
    We spoke to early-stage companies, publicly listed tech companies, through to legacy incumbents with hundreds of thousands of employees
    All will be more remote
  • Pollution reduction: many companies we’ve spoken to care massively about the environmental impact that eradicating the office – and the commute – will have
    108 million tons of CO2 less every year
  • Quality of life: even more importantly companies are realizing that they don’t need to expect workers to waste 2 hours a day commuting to sit in an office chair for 8 hours
    Almost every company we talk to believes that their workers will be happier as a result of remote work
  • Remote pressure: a few companies we’ve spoken to have decided to be more remote than they initially intended because their competitors already did it
    There is a fear inside companies that if they don’t go remote they will lose their best people to their competitors
  • Remote fear: most companies aren’t scared about the quality of work that will be produced
    They are scared about intangible things they can’t measure
    ‘quality of communication’ & ‘collaboration in person’ & ‘water cooler chat’
    Many have realized these were excuses
  • Output over time: the measure of performance in the office is how much time you spend sitting in your seat
    The measure of performance while working remotely has to become output.
    Tools that enable this to be tracked more accurately are something we are asked for a lot
  • Written over spoken: documentation is the unspoken superpower of remote teams. The most successful remote team members will be great writers
    Companies are searching for ways to do this more effectively. Tools that enable others to write better will explode
  • Flattened orgs: middle management is in trouble; it is an unnecessary bottleneck which serves no tangible purpose inside async organizations
    Companies need coaching and facilitators to maximize organizational effectiveness
  • Company resorts: Several companies are thinking about creating resort-like compounds where work happens in person
    Expect these to be built in incredible locations and focused on providing the best on-site experience possible
  • Remote Laws: Many companies are beginning to operate under the assumption that the choice to work remotely will become a legal right
    This will give workers the option to choose where they work, and many companies are acting before they are forced
  • Meeting Death: Wasting 2 hours traveling to a meeting will end. The benefits of meeting in person are eroded by the benefits of not traveling
    Conferences and quarterly networking events will become more important for cultivating in-person relationships
  • Internal community: Team cohesion and company culture isn’t impossible remotely – but it’s very different
    In the same way companies are finally realizing the power of community externally, internal community may become even more important to a company’s success

Microsoft Ignite and IoT

A few weeks ago, Microsoft Ignite took place in Orlando, FL. I was – unfortunately – not able to attend myself, but I have found a number of interesting sessions and announcements around IoT which I thought others might be interested in as well.

Enjoy 🙂


Guide to staying safe during a hacker conference

Some weeks back Microsoft held its bi-annual technical conference. This time, however, the event was moved from Seattle to Las Vegas, and it happened to run right into the week when hackers were coming to Vegas for DefCon, the famed hacker convention that is notorious for publishing zero-day exploits and for running the Wall of Sheep, a public list that showcases all of the people (and their information) that attendees hacked during the conference.

To raise awareness of this and keep the rest of us safe in Vegas, a colleague shared some of his learnings. It is good advice, worth sharing with everyone.

The following is a quick bulleted list of the things you should do when going to any Black Hat/White Hat security event:

  • Do not use public WiFi!
  • Always use your VPN (the real one, not AutoVPN if that is an option on your box) to ensure end-to-end encryption of network traffic when connecting to ANY network in Vegas.
  • Turn off 3G/4G on your phone (only allow LTE).
  • Disable NFC and Bluetooth on all of your devices.
  • Turn off Auto-Join WiFi networks for all devices.
  • Ideally only carry a clean, non-work device with no personal email, files or other accounts attached. Just use it as a dummy browser tool, and never log into any personal sites.
  • Do not go to personal sites such as banks and alike when out in the open.
  • NEVER TYPE YOUR PASSWORD in a public forum.
  • Make sure your system is fully patched with all of the latest security updates.
  • Turn off non-essential services such as:
    • File and Printer Sharing and NetBIOS over TCP/IP
    • Telnet
    • SSH/RDP
  • Do not use USB outlets to charge your phones. Always use a real electrical outlet.
  • Never use a USB drive that someone gives you or that you find on the ground.
  • Have a shielded wallet/purse/carrying case for your credit cards.
  • When using an ATM, make sure that there are no loose devices attached to the card reader and no small cameras pointed at the keypad.
  • Change all of your passwords after you leave Vegas.

Resources:


Azure Machine Learning and Management REST API

I’m currently involved in an IoT project where we have to call a number of R-models hosted in Azure Machine Learning.

This post is not about publishing the models and calling the endpoints; that part is pretty straightforward.

Rather this post is about utilizing the Management REST APIs.

I initially had problems with the authentication. To authenticate towards the endpoint (both the model and the management) you have to set the Authorization header to a JSON Web Token.

The following is a short guide on how to get everything working.

The first thing you have to find out is if your Azure ML endpoints are hosted the old/classic way or the new way.

  • If the former, the security token can be found in Azure ML Studio under Settings for the given model (see the small sketch right after this list).
  • If the latter, you need to create an Azure Active Directory (AAD) token.
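
For the classic case it is really just a matter of putting that token in the Authorization header as a Bearer token. A minimal, illustrative sketch (the key value is of course a placeholder):

using System.Net.Http;
using System.Net.Http.Headers;

// Classic/old-style endpoints: the security token from ML Studio -> Settings
// goes directly into the Authorization header as a Bearer token.
var client = new HttpClient();
var apiKey = "[Security token from ML Studio Settings]";
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);
// ...then call the endpoint with client.GetAsync / client.PostAsync as usual.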

For a number of reasons I have my endpoints hosted the new way, hence the need to get an AAD token. This can be a little tricky if you have never done it before.

That is really all there is to it.

A small code example is given below.

// Required namespaces
using System;
using System.Globalization;
using System.Net.Http;
using System.Net.Http.Headers;
using Microsoft.IdentityModel.Clients.ActiveDirectory; // ADAL NuGet package

// Constants
var aadInstance = "https://login.microsoftonline.com/{0}";
var tenant = "Contoso.onmicrosoft.com";
var authority = String.Format(CultureInfo.InvariantCulture, aadInstance, tenant);
var clientId = "[Your Client Id]";
var appKey = "[The App Key]";
var subscriptionId = "[Your Azure Subscription Id]";
var resourceGroupName = "[Resource Group that hosts the Machine Learning workspace]";

// Create Authentication Context
var authContext = new AuthenticationContext(authority);
var clientCredential = new ClientCredential(clientId, appKey);

// Get Security Token for the Azure Resource Manager endpoint
var azureMlResourceId = "https://management.azure.com/";
var result = await authContext.AcquireTokenAsync(azureMlResourceId, clientCredential);
var token = result.AccessToken;

var client = new HttpClient();

var address = $"subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearning/webServices?api-version=2016-05-01-preview";

// The AAD token goes into the Authorization header as a Bearer token
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
var response = await client.GetAsync(azureMlResourceId + address);

if (response.IsSuccessStatusCode)
{
    // Do stuff
}

On a side note I should mention that my solution is running inside Azure Service Fabric. I have Services and Actors call a library that handles the actual communication and makes the REST call. I had to install the NuGet package in BOTH the library and the Service/Actor; otherwise I got an initialization error when trying to create the AuthenticationContext.


Continuous deployment of Azure Service Fabric Application

I’m doing a fair amount of work where the backend is built as an Azure Service Fabric application.

Setting up a continuous build and deployment pipeline in VSTS (Visual Studio Team Services) I ran into the following problem when the deployment step ran:

[error]Exception while parsing XML file: d:\a\_temp\TestApplicationPackage_2648631722\o0cl5dgw.ynx\applicationpackage\ApplicationManifest.xml
FileName: d:\a\_temp\TestApplicationPackage_2648631722\o0cl5dgw.ynx\applicationpackage\ApplicationManifest.xml

I’m (still) using VS2015. The version of the Service Fabric SDK is 5.7.0-preview5718. The current fabric version is 5.6.210.9494.

So it did not look like an issue with the versions. Note that there was a bug in version 5.5 of the SDK, which gave you errors when you tried to deploy your application to a newer cluster.

When I created the deployment step in VSTS I just selected the default task and made no modifications apart from pointing the environment to the correct cluster. Here you must be sure to use the correct method of authentication (I use certificate-based).

Digging a little further I noticed that the default agent was “Hosted”. I changed this to “Hosted2017” and voilà, problem solved.

For consistency I also changed the default agent on the build task to “Hosted2017”.

Now to set up semantic versioning and maybe upgrade my VS to 2017.


Death to the SLA

During my work I’m often asked what the SLA (Service Level Agreement) is for a given system. I’ve tried to summarize my reply in the following blog post.

The SLA way of thinking works badly with cloud services and the highly distributed systems we create these days.

The SLA of the individual services is insufficient because there is going to be solution-specific code in many places, which is often where problems get introduced.

If, for argument’s sake, you have a perfectly efficient, bug-free solution that will never exceed its scale targets, you will still have a problem. A solution built on multiple services – like most IoT solutions I work on – will experience failures. If we look at the services in a given hot path and at what their SLAs translate into in monthly downtime (in minutes), we get the following (the actual SLA numbers may not be correct, but that does not matter for the conclusion):

Service                  SLA       Monthly downtime (minutes)
IoT Hub                  99.9%     43.83
Event Hub                99.9%     43.83
Cloud Gateway/Service    99.95%    21.92
Azure Blob Storage       99.9%     43.83
Azure SQL DB             99.9%     43.83
Document DB              99.95%    21.92
Azure Table Storage      99.9%     43.83
Azure Stream Analytics   99.9%     43.83
Notification Hub         99.9%     43.83
Total                              350.65

Worst case, each service fails independently, so their downtime is cumulative. That means that the SLA-acceptable downtime is roughly 350 minutes, or almost 6 hours, per month, which translates to a solution availability of 99.19%. And this is just the “unplanned” downtime. It also assumes that you have no bugs or inefficient code in your solution.
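
To make the arithmetic explicit, here is a small, illustrative sketch (not from the original text) of the calculation behind the table; the SLA figures are the same illustrative ones used above:

using System;

// SLA percentage -> monthly downtime in minutes, plus the worst-case
// cumulative downtime across all services in the hot path.
const double minutesPerMonth = 30.44 * 24 * 60; // ~43,834 minutes in an average month

var slas = new (string Service, double Sla)[]
{
    ("IoT Hub", 99.9), ("Event Hub", 99.9), ("Cloud Gateway/Service", 99.95),
    ("Azure Blob Storage", 99.9), ("Azure SQL DB", 99.9), ("Document DB", 99.95),
    ("Azure Table Storage", 99.9), ("Azure Stream Analytics", 99.9), ("Notification Hub", 99.9),
};

double totalDowntime = 0;
foreach (var (service, sla) in slas)
{
    var downtime = (100 - sla) / 100 * minutesPerMonth;
    totalDowntime += downtime;
    Console.WriteLine($"{service,-25} {sla,6:0.00}% {downtime,8:0.00} min/month");
}

var availability = 100 * (1 - totalDowntime / minutesPerMonth);
Console.WriteLine($"Worst case cumulative downtime: {totalDowntime:0.00} min/month (~{availability:0.00}% availability)");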

Another issue with the old or traditional way of thinking is that failure to meet an SLA often results in a service credit of some percentage. So, in effect, customers are really asking: what is the likelihood that I will get a refund? This is completely the wrong question to ask. An SLA doesn’t help build great solutions that meet users’ needs.

At this point, you may be thinking: this is all good, but the customer will still require some sort of guarantee as to the uptime. How do you answer that question?

Well, no one said it would be easy.

Usually I try to change the discussion:

  • What if we had an SLA, we went offline/down, and the millions of IoT devices could not check in for an hour? How much money and goodwill would that cost you? Would a service credit for that hour make you whole?
  • What if we had an SLA and were up, but you had a bug – or something else outside the scope of the SLA – that took you down end-to-end for an hour? An SLA would not help at all in that situation.

So that means we need to take a different approach.

Given that an SLA would not solve things, we need to move past that and talk about how to be successful.

So what is the desired approach?

  1. Design the app/service to be resilient
  2. Design the monitoring and operations tools and processes to be ready for new services

#1 is usually a focus, as most customer development teams are already thinking this way. However, the operations teams are not.
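
To be concrete about #1: one small building block of a resilient design is retrying transient failures with a backoff. The following is a minimal, illustrative sketch (not from any specific library; in a real solution you would typically reach for something like Polly or the retry policies built into the Azure SDKs):

using System;
using System.Threading.Tasks;

// Retry an operation a few times with exponential backoff before giving up.
// Purely illustrative; tune the attempts, delays and exception filtering to your scenario.
public static async Task<T> RetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return await operation();
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Back off: 1s, 2s, 4s, ...
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}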

#2 is usually the elephant in the room that blocks deployments, either through operations or compliance, as they still think the old way, in terms of SLAs.

Customers want an SLA because that is how they are used to thinking. If they really want to take full advantage of the cloud they need to move on.

Microsoft simplifies IoT and data analysis further

Next week is the yearly Hannover Messe in Hannover, Germany.

It is the main fair for manufacturing companies and their partners, and Microsoft will, of course, have a strong presence.

I’m actually going this year and must say that I’m quite excited. If a reader of this blog happens to be in Hannover next week, give me a shout; could be fun to meet in real life.

Microsoft has just announced a number of solutions that will greatly simplify IoT and help businesses speed up their digital transformation.

Microsoft IoT Central is a new software-as-a-service (SaaS) offering that reduces the complexity of IoT solutions. It is a fully managed offering for customers and partners that enables powerful IoT scenarios without requiring cloud solution expertise.

A new preconfigured solution called Connected Factory also looks very promising.

If you are more into time series analysis, check out Azure Time Series Insights.

Time Series Insights gives you a near real-time global view of your data across various event sources and lets you quickly validate IoT solutions and avoid costly downtime of mission-critical devices. It helps you discover hidden trends, spot anomalies and conduct root-cause analysis in near real-time, all without writing a single line of code, through its simple and intuitive user experience. Additionally, it provides rich APIs to enable you to integrate its powerful capabilities into your own existing workflow or application.

See you in Hannover!


Simplifying IoT Architecture

I’ve been working with IoT projects for the last couple of years. A very common pattern is illustrated in the figure below:

Old architecture pattern

Your devices are sending in data. The Azure IoT Hub is used as the cloud gateway or ingestion point. You persist all the incoming messages so you can retrieve them later, and at the same time you forward them to the Event Hub for (near) real-time processing. This is not illustrated, but you can have a consumer picking messages off the Event Hub.

Until recently you had to create two consumer groups on your IoT Hub and have Azure Stream Analytics do the forwarding. I’ve shown two jobs here, but depending on the load, you might have been able to make do with just one containing two SELECT statements.

With the introduction of Endpoints and Routes in the IoT Hub and the Archive functionality in the Event Hub, this pattern can be simplified quite a lot, cutting out components and thereby making the architecture simpler, more manageable and more robust.

The new pattern is illustrated below:

New architecture pattern

We now use endpoints and routes to forward the messages to the Event Hub. It is possible to use filtering, so if filtering was previously done in Azure Stream Analytics, that is not a problem.
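
Routes typically filter on the application properties of the device-to-cloud messages. As a small, illustrative sketch (the property name, value and connection string are made up), the device could stamp a property on each message, and a route condition such as level = 'critical' would then select those messages:

using System.Text;
using Microsoft.Azure.Devices.Client;

// Illustrative device-side code: add an application property to the message
// so an IoT Hub route condition (e.g. level = 'critical') can filter on it.
var deviceClient = DeviceClient.CreateFromConnectionString("[device connection string]");
var message = new Message(Encoding.UTF8.GetBytes("{ \"temperature\": 42.0 }"));
message.Properties.Add("level", "critical");
await deviceClient.SendEventAsync(message);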

Messages are archived to blob storage directly from the Event Hub. Note that archived data is written in the Apache Avro format.


Minecraft Management Code Example

A couple of years ago I wrote a blog post where I mentioned a small taskbar utility I had written to stop and start an Azure virtual machine.

In the post I promised to put the code on GitHub. Well, that never happened and today someone asked again, so here is a link to OneDrive and a Zip-file.

Knock yourself out, but please note that this is sample code, no guarantees, bla bla bla.


Azure Management Libraries

This is the second blog post in the small series on experiences and learnings gained while setting up a Minecraft server for the kids. The first spoke primarily about Azure Automation; this one will touch upon the new .NET libraries for Azure Management.

The challenge was the following: enable the kids to start the virtual machine running the Minecraft server without giving them access to the overall subscription.

We created a small app running in the taskbar. When the app starts, it shows a yellow triangle indicating that the status of the virtual machine is being established.

[Screenshot: taskbar icon showing the yellow status triangle]

Depending on whether the instance status is StoppedDeallocated or ReadyRole, either a red cross

[Screenshot: taskbar icon showing the red cross]

or a green check mark will be shown

[Screenshot: taskbar icon showing the green check mark]

Right-clicking will display the menu items (they should be self-explanatory):

[Screenshot: right-click menu items]

For this to work, a couple of setting values are required. They are the following:

  • Service Name: This is the name of the cloud service where your virtual machine is deployed.
  • Virtual Machine: This is the name of the virtual machine.
  • Management Certificate: The Base64-encoded management certificate for your subscription.
  • Subscription ID: The ID for your Azure subscription.

The easiest way to get the certificate and subscription ID is to use the PowerShell cmdlet Get-AzurePublishSettingsFile. This will download a file containing both, as well as some other information.

<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile
    SchemaVersion="2.0"
    PublishMethod="AzureServiceManagementAPI">
    <Subscription
      ServiceManagementUrl="https://management.core.windows.net"
      Id="5fbxxxxxxxxxxxxxxxxxxxxxxxxxxxfe06e"
      Name="[Name of your Azure subscripton]"
      ManagementCertificate="MIIKPAIBAzCeI2S2N5Sbz4kAyL60DtKY=" />
  </PublishProfile>
</PublishData>
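
A minimal, illustrative sketch of pulling the two values out of the downloaded .publishsettings file (the file path is a placeholder):

using System;
using System.Linq;
using System.Xml.Linq;

// Read the subscription ID and the Base64-encoded management certificate
// from the publish settings file downloaded by Get-AzurePublishSettingsFile.
var doc = XDocument.Load(@"C:\temp\MySubscription.publishsettings");
var subscription = doc.Descendants("Subscription").First();

var subscriptionId = subscription.Attribute("Id").Value;
var base64EncodedCert = subscription.Attribute("ManagementCertificate").Value;

Console.WriteLine(subscriptionId);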

The settings dialog can be seen below.

[Screenshot: settings dialog]

Note: Yes, I know. If you change the values of the service name and the virtual machine you could start and stop other VMs, so this is not something you would give to your evil nephew. However, in the case of my kids, with the fear of losing their pocket money for the next 200 years, I think we are OK.

So much for the app, but how does it work? How to communicate with Azure?

Create a new project in Visual Studio (I’m using 2013, so I don’t know if it will work in 2012).

Load the Microsoft Azure Management Libraries using NuGet. This package contains everything.

[Screenshot: Microsoft Azure Management Libraries package in the NuGet package manager]

You could make do with only the Microsoft Azure Compute Management Library if you want to minimize the footprint, but why settle for anything but the whole package?

Before we can do anything we need to authenticate towards Azure.

The way this is currently done is by using an X509 certificate. So in my helper class I’ve created a small method returning a SubscriptionCloudCredentials. It can be seen below.

public SubscriptionCloudCredentials GetCredentials()
{
    // Build credentials from the subscription ID and the Base64-encoded management certificate.
    return new CertificateCloudCredentials(this.subscriptionId,
        new X509Certificate2(Convert.FromBase64String(this.base64EncodedCert)));
}

The subscriptionId and base64EncodedCert are two member variables containing the subscription ID and the Base64-encoded management certificate.

Using the CloudContext it is possible to create a ComputeManagementClient. I’ve defined a private member

private ComputeManagementClient computeManagement;

and create it like

computeManagement =
    CloudContext.Clients.CreateComputeManagementClient(GetCredentials());

To get the DeploymentStatus you can call the following:

var status = this.computeManagement
	    .Deployments
	    .GetByName(this.serviceName, this.virtualMachineName)
	    .Status;

Where this.serviceName and this.virtualMachineName are two private string members containing the two values respectively.

To start the virtual machine I’ve defined an async method

public async Task StartVMAsync(DeploymentStatus status)

The reason for passing in the status is so that if

status.Equals(DeploymentStatus.Running)

we simply return without doing anything.

The actual call to start the virtual machine is

var task = await this.computeManagement
	.VirtualMachines
	.StartAsync(this.serviceName, this.virtualMachineName, this.virtualMachineName,
    new CancellationToken());
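
Pieced together, the whole method could look something like this minimal sketch (assembled from the fragments above):

public async Task StartVMAsync(DeploymentStatus status)
{
    // Nothing to do if the virtual machine is already running.
    if (status.Equals(DeploymentStatus.Running))
    {
        return;
    }

    await this.computeManagement
        .VirtualMachines
        .StartAsync(this.serviceName, this.virtualMachineName, this.virtualMachineName,
            new CancellationToken());
}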

Likewise a StopVMAsync method is defined containing the call to stop the virtual machine:

var task = await this.computeManagement
	.VirtualMachines
	.ShutdownAsync(this.serviceName, 
				   this.virtualMachineName, 
				   this.virtualMachineName,
		new VirtualMachineShutdownParameters()
		{
			PostShutdownAction = PostShutdownAction.StoppedDeallocated
		},
		new CancellationToken());

And that is basically it. Of course the above should be packaged nicely and then called from the taskbar app.
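
As a rough, illustrative sketch of that packaging (the notifyIcon field and the Icons helper below are hypothetical placeholders, not part of the actual app), the taskbar app could refresh its icon like this:

private async Task RefreshStatusAsync()
{
    // Yellow triangle while the status is being established (hypothetical icon resources).
    this.notifyIcon.Icon = Icons.Unknown;

    var status = await Task.Run(() => this.computeManagement
        .Deployments
        .GetByName(this.serviceName, this.virtualMachineName)
        .Status);

    // Green check mark when running, red cross otherwise.
    this.notifyIcon.Icon = status.Equals(DeploymentStatus.Running)
        ? Icons.Running
        : Icons.Stopped;
}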

Time permitting I will push the code to GitHub, Codeplex or similar for people to download.

The official Service Management Client Library Reference can be found on MSDN.
