Tuesday, February 18, 2020

System Engineering Guidelines


While building our system that powers memory-intensive compute in Azure, we use the following engineering guidelines. We use them to build our BareMetal resource provider, cluster manager, etc. These are useful principles we have accumulated from experience building various systems. What other principles do you use and recommend including?
  1. Close on broad design before sending PRs.
    1. Add the design as markdown at the root of the feature code path and discuss it as a PR. For a broad cross-RP feature it is OK to place it at the repo root
    2. For sizable features, hold a design meeting
    3. The requirement for a design meeting is to send a pre-read, and the expectation is that all attendees have reviewed it before coming in
  2. Be aware of distributed system quirks
    1. Think CAP theorem. This is a distributed system; network partitions will occur. Be explicit about your availability and consistency model in that event
    2. All remote calls will fail; use retries with exponential back-off (see the retry sketch after this list). Log a warning on each retry and an error if the call finally fails
    3. Ensure we always have consistent state. There should be only one authoritative version of the truth. Having local data that is eventually consistent with that truth is acceptable; know the maximum time period for eventual consistency
  3. The system needs to be reliable, scalable and fault-tolerant
    1. Always avoid a SPOF (single point of failure). Even for absolutely required resources like SQL Server, retry (see below), fail gracefully and recover
    2. Have retries
    3. APIs need to be responsive and return in under a second for most scenarios. If something must take longer, return immediately with a mechanism to track progress of the background job that was started
    4. All APIs and actions we support should have a 99.9% uptime/success SLA; shoot for 99.95%. (99.9% allows roughly 43 minutes of downtime a month, 99.95% about 22)
    5. Our services should be stateless (state lives elsewhere in a data store) and designed to be cattle, not pets
    6. Systems should be horizontally scalable. We should be able to simply add more nodes to a cluster to handle more traffic
    7. Choose a managed service over attempting to build or deploy one in-house
  4. Treat configuration as code
    1. Breaks due to out-of-band config changes are too common, so treat config deployment the same way as code deployment (use SCD == Safe Config/Code Deployment)
    2. Config should be centralized. Engineers shouldn't have to hunt around to find configs
  5. All features must have a feature flag in config.
    1. The feature flag can be used to disable a feature on a per-region basis (see the config sketch after this list)
    2. Once a feature flag is disabled, the feature should cause no impact to the system
  6. Try to make sure your system works on a single box. This makes dev-test significantly easier. Mocking auxiliary systems is OK
  7. Never delete things immediately
    1. Don't delete anything instantaneously, especially data. Tombstone deleted data away from user view
    2. Keep data, metadata and machines around for a garbage collector to periodically delete after a configurable duration.
  8. Strive to be event driven
    1. Polling is bad as the primary mechanism
    2. Start with an event-driven approach and have fallback polling
  9. Have good unit tests.
    1. All functionality needs to ship with tests in the same PR (no test PR later)
    2. A unit test tests the functionality of a unit (e.g. a class or module)
    3. They do not have to test every internal function. Do not write tests for tests' sake. If the tests cover all scenarios exposed by a unit, it is OK to push back on comments like "test all methods".
    4. Think about what your unit implements and whether the test can validate that the unit still works after any change to it
    5. Similarly, if you add a reference to a unit from outside and depend on a behavior, consider adding a test to that unit so that changes to it don't break your requirements
    6. Unit tests should never call out from the dev box; they should be local tests only
    7. Unit tests should not require other things to be spun up (e.g. a local SQL server)
  10. Consider adding BVTs for scenarios that cannot be tested in unit tests.
    E.g. stored procs need to run against a real SQL DB deployed in a container during BVT, or query routing needs to be tested inside a web server
  11. All required tests should run automatically and not require humans to remember to run them
  12. Test in production via our INT/canary clusters. Some things simply cannot be tested on a dev setup as they rely on real services being up. For these, consider testing in production over our INT infra.
    1. All merges are automatically deployed to our INT cluster
    2. Add runners to INT that simulate customer workloads.
    3. Add real lab devices or fake devices that test as much as possible. E.g. add a fake SNMP trap generator to test the fluentd pipeline, or have real blades that are rebooted periodically using our APIs
    4. Bits are then deployed to Canary clusters, where real devices are used for internal testing and certification. Bake bits in Canary!
  13. All features should have measurable KPIs and metrics.
    1. You must add metrics for new features. Metrics should tell you how well your feature is working, whether it has stopped working, and whether any anomaly is observed
    2. Do not skimp on metrics; we can filter metrics on the backend, which is better than not having them fired at all
  14. Copious logging is required.
    1. Processes should never fail silently
    2. You must add logs for both success and failure paths. Err on the side of too much logging
  15. Do not rely on text logs to catch production issues.
    1. You cannot rely on spotting error logs from a container to catch issues. Have metrics instead (see above)
    2. Logs are a way to root-cause and debug, not to catch issues
  16. Consider on-call in all development
    1. Ensure you have metrics and logs
    2. Ensure you write good documentation that anyone on the team can understand without a ton of context
    3. Add alerts with direct links to TSGs (troubleshooting guides)
    4. Add actionable alerts that the on-call can quickly mitigate
    5. On-call should be able to turn off a specific feature in case it is causing problems in production
    6. All individual merges must be rollback-able. Since you cannot control when the code snap for production happens, each PR should be such that it can be individually rolled back
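
To make guideline 2.2 concrete, here is a minimal Go sketch of a retry helper with exponential back-off. The package and function names are illustrative only, not from our actual codebase:

package retry

import (
    "fmt"
    "log"
    "time"
)

// Do runs op up to maxAttempts times, doubling the wait between
// attempts. It logs a warning on every retry and returns an error
// only if all attempts fail.
func Do(maxAttempts int, initialDelay time.Duration, op func() error) error {
    delay := initialDelay
    var err error
    for attempt := 1; attempt <= maxAttempts; attempt++ {
        if err = op(); err == nil {
            return nil
        }
        if attempt < maxAttempts {
            log.Printf("warning: attempt %d/%d failed: %v, retrying in %v",
                attempt, maxAttempts, err, delay)
            time.Sleep(delay)
            delay *= 2 // exponential back-off
        }
    }
    return fmt.Errorf("all %d attempts failed: %w", maxAttempts, err)
}

A production version would typically also cap the delay and add jitter so that many clients do not retry in lockstep.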
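
Guideline 5 can be boiled down to a similarly small sketch. The shape below is a hypothetical illustration of a per-region feature flag lookup, not our actual config schema:

package config

// FeatureFlags maps a feature name to the regions where it is enabled.
// An absent feature or region reads as off, which is what lets a
// disabled flag cause no impact to the system (guideline 5.2).
type FeatureFlags map[string]map[string]bool

// Enabled reports whether feature f is turned on in region r.
func (ff FeatureFlags) Enabled(f, r string) bool {
    return ff[f][r]
}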

Saturday, February 15, 2020

The C Word - Part 2



Our confrontation with lymphoma started in 2011, when the C word entered our life and my wife was diagnosed with stage 4 Hodgkin's sclerosing lymphoma. We have fought through and continue to do so. Even though she is in remission now, the shadow of cancer still hangs over us.

We had just moved across the world from India to the US in 2010 and had no family and very few friends around, so we mostly had to duke it out ourselves with little support. We were able to get the best treatment available in the world through the Fred Hutch and Seattle Cancer Care Alliance. However, we realize not everyone is fortunate enough to be able to do so.

Our daughter has decided to do her part now and raise funds through the Leukemia & Lymphoma Society. If you'd like to help her, please head to

https://events.lls.org/wa/SOYSeattle20/pbasu



Sunday, January 19, 2020

Chobi - Face Detection Based Static Image Gallery Generator

Over the holidays I created a simple static photo gallery generator that I named chobi. The sources are at https://github.com/abhinababasu/chobi.

Given a source folder of photos, chobi will generate a destination folder containing the original photos, generated thumbnails, CSS stylesheets, scripts and HTML files, which together constitute a website displaying those photos as follows


The Problem

While building chobi, and also while looking into similar online tools, I kept hitting a major issue, and that prompted further work and this post.

You see, thumbnail generation from an image has a major problem. I needed the gallery generator to create square thumbnails for the image strip shown at the bottom of the page. However, the generated thumbnails would simply be cut from the center or some other arbitrary location of the image. This meant that the thumbnails would cut off at weird places.

Consider the following image.

If I create a thumbnail with a tool that doesn't know where the face is in the image, it will generate something like
Obviously that doesn't work. So using my vanilla generator I got a website with all sorts of similar head-chopped-off thumbnails (marked in red)


The Solution

Chobi uses face detection to ensure that does not happen and that the face is always fully present in the generated thumbnail. Consider the same thumbnail as above, but now generated with face detection.

Another example: an original image, and then the generated thumbnail, first without and then with face detection.


With face detection plugged in, chobi generates a much better website, with almost no photo cropped where it shouldn't be.
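
The idea is straightforward: run a face detector (chobi uses the pigo library for this) to get a face rectangle, then position the square crop window so that rectangle stays inside it. Here is a minimal Go sketch of that window placement; the function name and signature are illustrative, not the actual facethumbnail API:

package thumb

import "image"

// faceCrop returns a size x size square inside bounds that fully
// contains face where possible, clamping the window to the image
// edges. The face rectangle is assumed to come from a detector
// such as pigo.
func faceCrop(bounds, face image.Rectangle, size int) image.Rectangle {
    // Start with the crop window centered on the face.
    cx := (face.Min.X + face.Max.X) / 2
    cy := (face.Min.Y + face.Max.Y) / 2
    x := cx - size/2
    y := cy - size/2

    // Clamp so the window stays inside the image.
    if x+size > bounds.Max.X {
        x = bounds.Max.X - size
    }
    if y+size > bounds.Max.Y {
        y = bounds.Max.Y - size
    }
    if x < bounds.Min.X {
        x = bounds.Min.X
    }
    if y < bounds.Min.Y {
        y = bounds.Min.Y
    }
    return image.Rect(x, y, x+size, y+size)
}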

Sources

  1. Chobi sources are at https://github.com/abhinababasu/chobi
  2. It uses the face-detected thumbnail generator I wrote at https://github.com/abhinababasu/facethumbnail
  3. That in turn is based on pigo, a face-detection library written fully in Go

Sample

Check out a gallery built using chobi at 

Monday, January 13, 2020

How to run Windows 7 after end of support





Windows 7 end of support is upon us in one more day (1/14/2020). This post tries to answer whether you can safely continue to run it. The short answer is that you can't, at least if it is connected to the outside world in some form.

However, I have a friend back in India who relies on some software that he can't run on modern Windows. So when I was answering his question on how he can keep running it, I thought I'd write it up on the blog as well.

This post outlines how you can run Windows 7 in a virtual machine on Microsoft Hyper-V on a Windows 10 machine. The process also uses checkpoints to reset the VM back to its old state each time. This ensures that even if something malicious gets hold of the system, you can simply go back to the pristine state you started with.

Pre-requisite

Obviously you need a computer capable of running Hyper-V. For my purposes I am using a Windows 10 Professional machine. You should also have more than enough CPU cores and memory to run Windows 7 on that machine. I recommend at least 4 cores and 8 GB of memory so that you can give half of that to the Windows 7 VM and keep the rest for a functional host PC.

Get hold of a Windows 7 ISO, or download it from https://www.microsoft.com/en-us/software-download/windows7.

Then visit the system requirements page at https://support.microsoft.com/en-us/help/10737/windows-7-system-requirements. I decided to give my VM roughly twice the resources the requirements call for.

Setup VM

Hit the Windows key and type hyper-v to launch Hyper-V Manager. Then start creating a VM by clicking New and then Virtual Machine.



Choose Generation 1.



I then decided to give it 4 GB of memory and 2 CPU cores

I chose to create a 40 GB OS disk




Then I chose to install from a bootable CD and pointed the image-file location at the downloaded ISO

Click through to the end of the creation wizard and finish it. Then right-click on the newly created VM and choose "Connect".



Install Windows 7


At this point, if everything went well, the VM has booted off the installation ISO and we are at the following screen. Choose "Clean install" and proceed through the installation wizard.




Finally installation starts.


A reboot later we have Windows 7 starting up!

I created a username and password


Finally I booted into Windows 7, and here's my website displayed in Internet Explorer


Secure by Checkpoint

Even though we have booted into Windows 7, this will soon be a totally unsupported OS, which means no security updates. Such a system is dangerous to keep open to the internet; I recommend never doing that!! Also, to be doubly sure, we will create a checkpoint. A checkpoint creates a snapshot of the memory and the disk, so in case something malicious lands in this VM, we can go back to the pristine state at the time the snapshot was created and hence roll back any changes made by the virus or malware.

To create a checkpoint, right-click on the VM in Hyper-V Manager and choose Checkpoint


You can see the checkpoints created in the Hyper-V Manager.



Let's make a change to the Windows 7 VM by creating a file named "Howdy I am created.txt" on the desktop.


Since the checkpoint was created before the file was, I can revert to the checkpoint by right-clicking on it and choosing Apply.

After applying the checkpoint, when I go back into the VM the created file is all gone!!!

Finally

This is a hack at best and not recommended. However, if for some application or other need you "have" to run Windows 7, this can be an option.

Monday, January 06, 2020

Chobi - A static photo gallery generator



I love using Microsoft To Do, and before taking time off in December I created a holiday todo list. I tend to be at home with the family and do a bunch of projects around the house. I try to ensure that I am not doing only work-related projects during that time, so I put in a ceiling of half a week for coding-related stuff. The other todos generally involve carpentry, DIY home projects, yard work, cleaning, etc.

One of the projects was to update my online photo gallery. Being a programmer, I made it way more complicated than I should have. I decided to code up a minimalistic program to generate a static photo gallery out of the folders of images I export from Adobe Lightroom. As I mentioned above, one of the requirements was to finish it in around 3 days.

I am happy to share that the project is done and the sources are available at https://github.com/abhinababasu/chobi. It took me about 3 days, and most of the time was spent figuring out UI stuff, which I rarely do, and pondering which photos to put in the gallery.

The code is in Go and it does the following

  1. It iterates through a folder of images (sub-directories not supported yet) and copies the images to a destination
  2. It also places thumbnails (of configurable size) into the destination (see the sketch after this list)
  3. There is a template HTML that it modifies to display those images
  4. It also uses some client-side script to
    1. Randomize the image order
    2. Show a carousel of the images
    3. Show a thumbnail gallery at the bottom
    4. Rotate photos automatically
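
For flavor, here is a minimal Go sketch of the thumbnail step. The names are hypothetical and it uses golang.org/x/image/draw for scaling; the real code in the repo may differ:

package main

import (
    "image"
    "image/jpeg"
    "log"
    "os"
    "path/filepath"

    "golang.org/x/image/draw"
)

// makeThumb writes a thumbnail of srcPath, scaled to thumbW pixels
// wide with the aspect ratio preserved, into dstPath.
func makeThumb(srcPath, dstPath string, thumbW int) error {
    in, err := os.Open(srcPath)
    if err != nil {
        return err
    }
    defer in.Close()

    src, _, err := image.Decode(in)
    if err != nil {
        return err
    }

    b := src.Bounds()
    thumbH := b.Dy() * thumbW / b.Dx()
    dst := image.NewRGBA(image.Rect(0, 0, thumbW, thumbH))
    draw.CatmullRom.Scale(dst, dst.Bounds(), src, b, draw.Over, nil)

    out, err := os.Create(dstPath)
    if err != nil {
        return err
    }
    defer out.Close()
    return jpeg.Encode(out, dst, nil)
}

func main() {
    // Thumbnail every .jpg in the source folder (sub-directories
    // skipped); assumes the destination folder already exists.
    files, _ := filepath.Glob("photos/*.jpg")
    for _, f := range files {
        dst := filepath.Join("site/thumbs", filepath.Base(f))
        if err := makeThumb(f, dst, 240); err != nil {
            log.Printf("skipping %s: %v", f, err)
        }
    }
}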
Here's a screenshot of the sample landscape gallery.


Since this was a very time-bound project there are tons of things left to do, and some basic bugs abound as well. But I decided to time out on the effort for now and revisit it, hopefully in the spring.

Wednesday, October 09, 2019

CAYL - Code as you like day


Building an enterprise-grade distributed service is like trying to fix and improve a car while driving it at high speed down the freeway. Engineering debt accumulates fast, and engineers on the team yearn for the time to get to it. A common complaint is also that we need more time to tinker with cool features and tech, to learn and experiment.

An approach many companies take is the big hackathon event. Even though hackathons have their place, I think they are mostly for PR and eye candy. Which exec doesn't want to show the world that their company created an AI-powered blockchain running on a quantum computer in just a 3-day hackathon?

This is where CAYL comes in. CAYL, or “Code As You Like”, is named loosely after the “go as you like” event I experienced as a student in India. In a lot of uniform-based schools in Kolkata, it is common to have a go-as-you-like day, where kids dress up however they want.

Even though we call it code as you like, it has evolved beyond coding. One of our extended program management teams has also picked this up and calls it WAYL (Work As You Like) day. This is what we have set aside in our group calendar for this event:


“Code as you like day” is a reserved date every month (the first Monday of the month) where we get to code, document or learn something on our own.
There will be no scheduled work items and no SCRUM
We simply do stuff we want to do. Examples include but are not limited to
  1. Solve a pet peeve (e.g. fix a bug that is not scheduled but you really want to get done)
  2. A cool feature
  3. Learn something related to the project that you always wanted to figure out (how do we use fluentd to process events, what is helm)
  4. Learn something technical (how do Go channels work, Go assembly code)
  5. Shadow someone from a sibling team and learn what they are working on
We can stay late and get things done (you totally do not have to do that) and there will be pizza or ice cream.
The one requirement is that you *have* to present whatever you did the next day. 5 minutes each

I would say we have had great success with it. We have had CAYL projects all over the spectrum:
  1. Speed up the build system and just make building easier
  2. An ML vision device that can tell you which bin trash needs to go in (e.g. if it is compostable)
  3. A better BVT system, and cross-porting it to work on our Macs
  4. Pet peeves, like making function naming more uniform, removing TODOs from code, spelling/grammar fixes, etc.
  5. Better logging and error handling
  6. Fix SQL resiliency issues
  7. Move some of our older custom management VMs to AKS
  8. Bring in gomock, go vet, static checking
  9. A 3D game where a mommy penguin gets fish for her babies and learns to be optimal using machine learning
  10. Experiment with Prometheus
  11. A dev spent a day shadowing a dev from another team to learn the cool tech they are using
We just finished our CAYL yesterday, and one of my CAYL items was to write a blog post about it. So it's fitting that I am hitting publish on this post as I sit in the CAYL presentation, eating kale chips.

Monday, September 30, 2019

Azure Dedicated

I remember a discussion with a group of friends around 8 years back. Microsoft was in its early days of becoming a leader in the cloud. Those friends, all techies in the Seattle area, had varying expectations of how it would work out. Many thought that a full-blown move was a few decades away, because their experience indicated that big companies ran on stacks that were very old and simply couldn't be moved to the cloud any time soon.

Waves of workloads have since been moving to the cloud. A new breed of startups were cloud native from the start, and they were the first to use the power of the cloud. Many large and small enterprises had already virtualized workloads, and those moved as well. Some moved their new workloads (green-field); some even followed lift-and-shift with some modifications (brown-field) into the cloud.

However, a class of large enterprises was stuck in their data centers. They wanted to use the power of the cloud: IoT integration, machine learning and the ability to grow their applications elastically. But the center of their systems ran on some stack that did not work on the standard virtualization offered in the cloud. These enterprises said that if they could not move those workloads into the cloud, they would need to keep the lights on in their data centers, and moving some peripheral workloads simply did not make sense.

This is where Azure Dedicated, and our team, come into the picture.

SAP HANA Large Instance

For some of these customers that #$%#@ is the SAP HANA in-memory DB on a single machine with 768 vCPUs and 24 terabytes of RAM (yup), and we have them covered. Some wanted to scale out to 60 terabytes in memory; we have them covered too, with our bare-metal machines running in Azure. See SAP HANA Large Instances on Azure. They then wanted to expand their applications elastically using VMs running on Azure with sub-1-ms latency to those bare-metal DB machines; we have that working too.

We started our journey in this area with this workload. We have now evolved into our own little organization in Azure, called Azure Dedicated, and also support the following workloads.

Azure VMware Solutions

Some customers wanted to run their VMware workloads, and we have two offerings for them; see more about Azure VMware Solution by CloudSimple and Virtustream here.

Hardware Security Modules

In partnership with other teams in Azure we support HSMs, the standard cryptographic appliances powering, say, financial institutions.

Cray Supercomputer?

So you need to simulate something, or do ML on tens of thousands of cores? We have Cray supercomputers running inside Azure for that!!

Azure NetApp Files

Working closely with the storage team, we deliver demanding file-based workloads running on Azure NetApp Files.

SkyTap

In partnership with SkyTap we provide IBM Power workloads on Azure to customers.

What next?

We know there are more such anchors holding back enterprises from moving into the cloud. If you have some ideas on what we should take on next, please let me know in the comments!

Wednesday, October 10, 2018

SAP HANA Large Instances on Azure


Over the past year I have been working to light up bare-metal machines in the Azure cloud. These are specialized bare-metal machines with extremely high amounts of RAM and CPU, in this particular case purpose-built to run the SAP HANA in-memory database. We call them HANA Large Instances, and they come certified by SAP (see the list here).

So why bare-metal? These are huge, high-performance machines that go all the way up to 24 TB of RAM (yup) and 960 CPU threads. They are purpose-built for the HANA in-memory database, with the right CPU-to-memory ratio and high-performance storage to run demanding OLTP + OLAP workloads. Imagine a bank being able to load every credit card transaction from the past 5 years and run analytics, including fraud detection, on a new transaction in a few seconds; or tracking the flow of commodities from the world's largest warehouses to millions of stores and hundreds of millions of customers. These machines come with a 99.99% SLA and can be reserved by customers across the world in US-East, US-West, Japan-East, Japan-West, Europe-West, Europe-North, Australia-SouthEast and Australia-East to run SAP HANA workloads.

At SAP TechEd and SAPPHIRE I demoed bare-metal HLI machines with standard Azure portal integration. Right now customers can see their HLI machines in the portal, and soon they will even be able to reboot them from the portal.

Portal preview

Click on the screenshot below to see a recorded video of how HANA Large Instances show up in the Azure portal and how customers can raise support requests from the portal.

Portal screenshot

Reboot Demo

This is something we are working on right now, and it will be available soon. Click on the screenshot below to see a video of a HANA Large Instance being rebooted directly from the portal.

Getting Access

Customers with HLI blades can run the following CLI command to register our HANA Resource Provider

az provider register --namespace Microsoft.HanaOnAzure

Or alternatively use http://portal.azure.com: go to your subscription that has HANA Large Instances, select “Resource Providers”, type “Hana” in the search box and click Register.


Questions?

Send them to sap-hana@microsoft.com

Friday, June 01, 2018

Deploy Cloud Dev Box on Azure with Terraform


Summary: See https://github.com/abhinababasu/cloudbox for a terraform based solution to deploy VMs in Azure with full remote desktop access.

Now the longer form :). I have blogged in the past about how to set up an Ubuntu desktop on Azure that you can RDP (remote desktop) into. Over the past few months I have moved to doing most of my development work exclusively on a cloud VM, and I love having the full desktop experience on my customized “cloud dev box”. I RDP into it from my dev box at work, my Surface Pro, a secure laptop, etc.

I wanted to ensure that I can treat the box as cattle and not a pet, so I came up with terraform-based scripts to bring up these cloud dev boxes. I have also shared them with my team at Microsoft, and a few devs are already using them. I hope they will be useful to you as well, in case you want something like this. All code is at https://github.com/abhinababasu/cloudbox

A few things about the main terraform script at https://github.com/abhinababasu/cloudbox/blob/master/cloudVM.tf 

  1. It is good security practice to ensure that your VM is locked down. I use Azure NSG rules to ensure that the VM denies in-bound traffic from the internet. The script accepts parameters with IP ranges which will then be opened up. This ensures that your VM is accessible only from safe locations; in my case those are Microsoft's IP ranges (for work) and my home IP address.
  2. While you can use just the TF file and setup script, I have a driver script at https://github.com/abhinababasu/cloudbox/blob/master/cloudshelldeploy.sh that you might find useful
  3. Once the VM is created, I use the remote-execution feature of terraform to run the script at https://github.com/abhinababasu/cloudbox/blob/master/cloudVMsetup.sh, which installs the various software I need, including the Ubuntu desktop and xrdp for remote desktop. This takes at least around 10 minutes
  4. By default a Standard_F8s machine is used, but that can be overridden with larger sizes (e.g. Standard_F16s). I have found that machines smaller than that don't provide adequate performance. Note: you will incur costs for running these biggish VMs

Pre-requisite

Obviously you need terraform installed. I think the whole system works really well if you launch from https://shell.azure.com, because that way all the credential stuff is handled automatically, and Cloud Shell comes pre-installed with terraform.

If you want to run from any other dev box, you need to have the Azure CLI and terraform installed (use the installterraform.sh script for that). Then do the following, setting the subscription under which you want the VM to run.

az login
az account set --subscription="<some subscription Id>"

While you can download the files from here and use them, you are better off customizing the cloudshelldeploy.sh script and then running it. I use the following to run:

curl -O https://raw.githubusercontent.com/bonggeek/share/master/cloudbox/cloudshelldeploy.sh
chmod +x cloudshelldeploy.sh
./cloudshelldeploy.sh abhinab <password>

Finally


Now you can use an RDP client like mstsc to log into the machine.

NOTE: In my experience 1080p resolution works well; 4K lags too much to be useful. Since mstsc defaults to full-screen, be careful if you are working on a hi-res display, and explicitly use 1080p resolution.

There I am logged into my cloud VM.


Wednesday, May 16, 2018

Getting Azure Cloud Location


I have had some asks on how to discover which Azure cloud the current system is running on. Basically, you want to figure out whether you are running in the Azure public cloud or in one of the specialized government clouds.

Unfortunately this is not currently available from the Instance Metadata Service (IMDS). However, it can be found using one additional call. The basic logic is to get the current location over IMDS and then call the Azure management API to see which cloud that location is present in.

Sample script can be found at https://github.com/bonggeek/share/blob/master/azlocation.sh

#!/bin/bash
# Get the current region from the Instance Metadata Service
locations=`curl -s -H Metadata:True "http://169.254.169.254/metadata/instance/compute/location?format=text&api-version=2017-04-02"`

# Test regions
#locations="indiasouth"
#locations="usgovsouthcentral"
#locations="chinaeast"
#locations="germanycentral"

# Get the per-cloud location lists from the Azure management API
endpoints=`curl -s "https://management.azure.com/metadata/endpoints?api-version=2017-12-01"`
publicLocations=`echo "$endpoints" | jq .cloudEndpoint.public.locations[]`

if grep -q "$locations" <<< "$publicLocations"; then
    echo "PUBLIC"
    exit 1
fi

chinaLocations=`echo "$endpoints" | jq .cloudEndpoint.chinaCloud.locations[]`
if grep -q "$locations" <<< "$chinaLocations"; then
    echo "CHINA"
    exit 2
fi

usGovLocations=`echo "$endpoints" | jq .cloudEndpoint.usGovCloud.locations[]`
if grep -q "$locations" <<< "$usGovLocations"; then
    echo "US GOV"
    exit 3
fi

germanLocations=`echo "$endpoints" | jq .cloudEndpoint.germanCloud.locations[]`
if grep -q "$locations" <<< "$germanLocations"; then
    echo "GERMAN"
    exit 4
fi

echo "Unknown"
exit 0

This is what I see for my VM


Monday, March 26, 2018

Azure Serial Console


My team just announced the public preview of the Azure Serial Console. This has been a consistent ask from customers who want to recover VMs in the cloud. Go to your VM in http://portal.azure.com and then click on the Serial Console button.


This opens a direct serial console connection to your VM; the VM is not required to be open to the internet. This is amazing for diagnosing VM issues, e.g. when you are not able to SSH to the VM for some reason (blocked port, bad config change, busted boot config), you can drop into the serial console and interact with your machine. Cool or what!!


To show you the difference between an SSH connection and the serial console, this is my machine booting up!!


Friday, October 06, 2017

Remote Ubuntu desktop on Azure


For the past many months I have had my dev boxes in the cloud. I am happily using a monster Windows VM and a utility Ubuntu desktop in the cloud. After talking to a few people, I realized they don't know how easy it is to set up a Linux remote desktop in the Azure cloud. Here are the steps.

For the VM configuration I needed a small VM with large enough network bandwidth to support remoting. Unfortunately, on Azure you cannot choose networking bandwidth; rather, all the VMs on a physical box get networking bandwidth proportional to their number of cores. So I created a VM based on the “Standard DS4 v2 Promo (8 vcpus, 28 GB memory)” size and connected it to Microsoft ExpressRoute. If you are OK with a public IP, you can skip setting up ExpressRoute and just ensure your VM has a public IP.

Then I went to the portal and enabled RDP. For that, choose VM –> Networking in the portal and add a rule to enable RDP.


Finally I SSHed into my VM with

c:\bin\putty.exe abhinaba@AbhiUbuntu

Time to install a bunch of tools

sudo apt-get update
sudo apt-get install xrdp
sudo apt-get install ubuntu-desktop
sudo apt-get install xfce4
sudo apt-get update

Set up the xsession

echo xfce4-session > ~/.xsession

Open the following file

sudo gvim /etc/xrdp/startwm.sh

Set the content to

#!/bin/sh

if [ -r /etc/default/locale ]; then
    . /etc/default/locale
    export LANG LANGUAGE
fi

startxfce4

Start the xrdp service

sudo /etc/init.d/xrdp start

And then from my Windows machine: mstsc /v:machine_ip. I am presented with the login screen


Then I have a full Ubuntu desktop on Azure :)
