Friday, October 06, 2017

Remote Ubuntu desktop on Azure


For the past several months my dev boxes have lived in the cloud. I am happily using a monster Windows VM and a utility Ubuntu desktop. After talking to a few people I realized that they don't know how easy it is to set up a Linux remote desktop in Azure. Here are the steps.

For VM configuration I needed a small VM with enough network bandwidth to support remoting. Unfortunately on Azure you cannot choose network bandwidth directly; all the VMs on a host get bandwidth proportional to the number of cores they have. So I created a VM based on the “Standard DS4 v2 Promo (8 vcpus, 28 GB memory)” size and connected it to Microsoft ExpressRoute. If you are OK with a public IP you can skip ExpressRoute and just ensure your VM has a public IP.

Then I went to the portal and enabled RDP. For that, choose VM –> Networking and add a rule to allow inbound RDP (TCP port 3389).

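If you prefer the command line over the portal, the Azure CLI can punch the same hole (a sketch; the resource group name below is a placeholder):

az vm open-port --resource-group MyResourceGroup --name AbhiUbuntu --port 3389

Under the covers this adds an inbound network security group rule for port 3389.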

Finally I sshed into my VM with

c:\bin\putty.exe abhinaba@AbhiUbuntu
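If you don't have PuTTY handy, any OpenSSH client does the same job:

ssh abhinaba@AbhiUbuntu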

Time now to install a bunch of tools

sudo apt-get update
sudo apt-get install xrdp
sudo apt-get install ubuntu-desktop
sudo apt-get install xfce4
sudo apt-get update

Set up the xsession

echo xfce4-session > ~/.xsession

Open the following file

sudo gvim /etc/xrdp/startwm.sh

Set the content to

#!/bin/sh

if [ -r /etc/default/locale ]; then
    . /etc/default/locale
    export LANG LANGUAGE
fi

startxfce4

Start the xrdp service

sudo /etc/init.d/xrdp start
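To have xrdp come back automatically after a reboot, you can also enable the service (on systemd-based Ubuntu releases):

sudo systemctl enable xrdp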

And then from my Windows machine I connect with mstsc /v:machine_ip and am presented with the login screen

[Screenshot: xrdp login screen]

and then I have a full Ubuntu desktop on Azure :)

[Screenshot: the full Ubuntu desktop over RDP]

Thursday, September 14, 2017

Distributed Telemetry at Scale


In Designing Azure Metadata Service I elaborated on how we run Azure Instance Metadata Service (IMDS) at massive scale. Running at this scale in 36 regions of the world (at the time of writing), on an incredible number of machines, is a hard problem in terms of monitoring and collecting telemetry. Unlike a centralized service, it is not as simple as connecting it to a single telemetry pipeline and being done with it.

We need to ensure that

  1. We do not collect too much data (cost/latency)
  2. We do not collect too little (hard to debug issues)
  3. Data collection is fast
  4. We are able to drill down into specific issues and problem areas
  5. Do all of the above when running in 36 regions of the world
  6. Continue to do all of the above as Azure continues its phenomenal growth

To meet all these goals we take a three-pronged approach, breaking telemetry out into 3 paths

  1. Hot-path: Minimal numeric data that can be uploaded super fast (a few seconds delayed) that we use for monitoring the service and alerting in case an anomaly is detected
  2. Warm-path: Richer textual data that is a few minutes delayed, which we use to drill down into issues remotely when hot-path flags a problem
  3. Cold-path: Full-fidelity data for deep analysis

[Diagram: hot, warm, and cold telemetry paths]

Hot-Path

Even though we run in so many places, we want near real-time alerting and monitoring so we can quickly catch if something bad is happening. For that we use performance and functionality counters. These counters measure the type of responses we are giving back, their latencies, data sizes, etc. All of them are numeric and track each call in progress. We have high-speed uploaders on each machine, with backends that collect the counters, and we attach alerts to them at the cluster level. We can catch latency issues and failures with a few seconds of delay. These counters only tell us that something is going bad, not why. We have tens of such high-speed numeric counters coming from each IMDS instance.

Here’s a snapshot of one such counter in our dashboard, showing latency at the 90th percentile.

[Dashboard: 90th-percentile latency counter]

In addition we have external dial-tone services that keep pinging IMDS to ensure the service is up everywhere. If there is no response then there has likely been a crash or a deadlock. We measure the dial-tone as part of our uptime and also have alerts tied to it.

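Conceptually a dial-tone probe is nothing more than a loop that calls the service and flags when the answer stops coming back. A minimal sketch of the idea (the endpoint, timeout, and interval here are illustrative, not our production probe):

while true; do
  # -f fails on HTTP errors, -m bounds how long we wait for a response
  if ! curl -s -f -m 5 -H Metadata:True "http://169.254.169.254/metadata/instance?api-version=2017-04-02" > /dev/null; then
    echo "$(date -u) dial-tone missed" >> /var/log/dialtone.log
  fi
  sleep 30
done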

Warm-Path

If a hot-path counter-driven alert tells us something has gone wrong and an on-call engineer is awakened, the next order of business is to quickly figure out what's going on. For that we use our warm-path pipeline. This pipeline uploads informational and error-level logging. Due to the volume, the data is delayed by a few minutes, and the query granularity can also slow down fetching it. This is why one focus of the hot-path counters is to narrow the problem down to the cluster or machine level.

The alert directly filters the uploaded logs down to that cluster/machine and brings them up. In most cases they are sufficient for us to diagnose the issue. When that doesn't work we need to go to the detailed logs.

Cold-Path

Every line of log (error/info/verbose) our service creates is stored locally on the machines with a certain retention policy. We have built tools so that given an alert, an engineer can run a command from a dev box to fetch the log directly from that machine, wherever in the world the machine with the log exists. For hard-to-debug issues this is the last recourse.

Even cooler, we use our CosmosDB offering as a document store and push all error and info logs into it. This keeps the logs queryable for a long time (months) for reporting and analysis. We also run jobs that read the logs from these cosmos streams and shove them into Kusto as structured data. Kusto is also available to users under the fancier name of Azure Application Insights Analytics. I was floored by the insight we can get with this pipeline. We upload close to 8 terabytes of log data a day into cosmos and can still query months of data in a few seconds.

Here’s a quick peek at what kind of responses IMDS is handing out.

[Chart: breakdown of IMDS responses]

A look into the kinds of queries coming in.

[Chart: kinds of queries coming in]

Distribution of IMDS versions being asked for.

[Chart: distribution of IMDS versions requested]

We can extract patterns from the logs, run regex matches and all sorts of cool filters, and at the same time render data across our fleet in seconds.

Monday, September 11, 2017

Designing Azure Metadata Service


Some time back we announced the general availability of Azure Instance Metadata Service (IMDS). IMDS has been designed to deliver instance metadata to every IaaS virtual machine running on Azure over a REST endpoint. IMDS works as a data aggregation service: it fetches data from various sources and surfaces it to the VM in a consistent manner. Some of the data is already on the physical machine running the VM; the rest lives in regional services that are remote from the machine.

As you can imagine, the scale of usage of this service is immense, it spans the globe (at the time of writing, 36 regions), and Azure usage doubles year over year. So any design for IMDS has to be highly scalable and built for future growth.

We had many options for building this service, based both on the reliability targets we wanted to hit and on engineering ease.

[Diagram: typical cloud hierarchical layout]

Given a typical cloud hierarchical layout, you can imagine such a service to be built in any one of the following ways

  1. Build it like any other cloud service that runs on its own IaaS or PaaS infrastructure, with load-balancers, auto-scaling, mechanisms for distributing across regions, sharding etc.
  2. Dedicate machines in clusters or data centers that run this service locally
  3. Run micro-services directly on the physical machines that host the VMs

Initially, building a cross-region managed service (#1) seems like the simpler choice. Pick any standard REST stack, deploy using any of the many deployment models available in Azure, and go with that. With auto-scaling and load balancers it should just work and scale.

As with any distributed system, we looked into our CAP model.

  1. Consistency: We could live with a relaxed, eventually consistent model for metadata. You can update the metadata of a virtual machine by making changes in the portal or using the Azure CLI, and eventually the virtual machine gets the last updated value
  2. Availability: The data needs to be highly available because various key pieces of the Azure internal stack take a dependency on this metadata, along with customer code running inside the VM
  3. Partition tolerance: The network is highly partitioned, as is evident from the diagram above

Metadata of virtual machines is updated infrequently but used heavily across the stack (reads are much more common than updates). We needed to guarantee very high availability over a highly partitioned infrastructure, so we chose to optimize for partition tolerance and availability with eventual consistency. With that, a regional service was immediately discarded: it cannot provide high enough availability.

Coupled with the above requirements and our existing engineering investments, we chose approach #3: running IMDS as a micro-service on each Azure host machine.

  1. Data is fetched and cached on every machine, which means the data is less fresh but always eventually consistent as updates get pushed to those machines. Varying levels of freshness exist based on the specific source the metadata is fetched from. Some metadata needs to be pushed to the machine before it is applied anyway and is hence always fresh; others, like VM tags, have weaker freshness guarantees
  2. Since the data is served from the same physical machine, the call doesn't leave the machine at all and we can provide very high availability. Other than during ongoing software deployments and system errors, the data is always available. There is no network partition.
  3. There is no need to balance load or shard out data, because the data is on the machine where it is served. The solution automatically scales with Azure: more customers mean more Azure machines running them, and more places for IMDS to run on
  4. However, deployment and telemetry at this scale are tough. Imagine how large an Azure deployment is and consider deploying and updating a service that runs everywhere on it.

It’s really fun working on problems at this scale and it’s always a learning experience. I look forward to sharing more details on my blog here.

Friday, September 08, 2017

Azure Instance Metadata Service


One of the projects in Microsoft Azure that I have been involved with is the Instance Metadata Service (IMDS). It’s a massively distributed service that, among other things, brings metadata to IaaS virtual machines running on Azure.

IMDS is documented at https://aka.ms/azureimds. Given that the API is already well documented there, and like all services will evolve to encompass more scenarios, I will not repeat that effort here. Rather, I want to cover the background behind some of the decisions in the API design.

First let’s look at the API itself and break it down to its essential elements

D:\>curl -H Metadata:True "http://169.254.169.254/metadata/instance?api-version=2017-04-02&format=text"
compute/
network/

The metadata API is REST based and available over a GET call at the non-routable IP address 169.254.169.254. This IP has been reserved in Azure for some time now and is used for similar reasons in AWS. All calls to this API have to carry the header Metadata:True. This ensures that the caller is not blindly forwarding an external call it received but is deliberately accessing IMDS.

All metadata is rooted under /metadata/instance. In the future other kinds of publicly available metadata could be made available under /metadata.

The API versions are documented in the link shared above, and the caller needs to explicitly ask for a version, e.g. 2017-04-02. Interestingly it was initially named 2017-04-01, but someone on our team thought it’s not a great idea to ship the first version of an API based on April Fools’ Day.

We did consider supporting something like “latest”, but experience tells us that it leads to fragile code. As versions get updated, invariably some user’s script/code that depends on latest having a particular shape breaks. Moreover, it’s hard for us to gauge which versions are being used in the wild, as users may just use latest while having implicit dependencies on some of the metadata values.

We support two formats, JSON and text. Using JSON you can fetch the entire metadata and parse it on your side. A sample PowerShell screenshot is shared below.

[Screenshot: fetching IMDS JSON from PowerShell]

However, we wanted to support a simple text-based approach as well. It’s easiest to imagine the metadata as a DOM (document object model) or even a directory tree. On asking for the text format at any level (the root being /metadata/instance) the immediate child data is returned. In the sample above, the top-level compute and network are returned. Each is on its own line, and if a line ends with a slash, that data has more children. Since compute/ was returned, we can fetch its children with the following.

D:\>curl -H Metadata:True "http://169.254.169.254/metadata/instance/compute?api-version=2017-04-02&format=text"
location
name
offer
osType
platformFaultDomain
platformUpdateDomain
publisher
sku
version
vmId
vmSize

None of them have a “/” suffix, hence they are all leaf-level data. E.g. we can fetch the unique ID of the VM and the operating system type with the following calls

D:\>curl -H Metadata:True "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-04-02&format=text"
c060492e-65e0-40a2-a7d2-b2a597c50343

D:\>curl -H Metadata:True "http://169.254.169.254/metadata/instance/compute/osType?api-version=2017-04-02&format=text"
Windows

The entire idea is that the API is usable from callers like bash scripts and other cases that don’t want or need to pull in a JSON parser. The following bash script pulls the vmId from IMDS and displays it

vmid=$(curl -H Metadata:True "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-04-02&format=text" 2>/dev/null)
echo $vmid
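If you do want the JSON form instead, parsing it is a one-liner with a tool like jq (assuming jq is installed; omitting format=text returns JSON, and the field path follows the compute layout shown above):

curl -s -H Metadata:True "http://169.254.169.254/metadata/instance?api-version=2017-04-02" | jq -r .compute.vmId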

I have shared a few samples of using IMDS at https://github.com/bonggeek/Samples/tree/master/imds

Do share feedback and requests using the URL https://aka.ms/azureimds

Monday, November 14, 2016

Magic Mirror

I just implemented a MagicMirror for our home. Video footage of it working:

[Video: the mirror in action]

Basically I used the open source http://github.com/MichMich/MagicMirror. I had to make a few changes: I added a hideall module so that everything stays hidden and becomes visible on detecting motion. My sources are at https://github.com/bonggeek/hideall

This is the first prototype, working on a desk monitor with a 1mm two-way mirror held in front of it.

[Photo: the first prototype]

It was evident that the thin 1mm acrylic mirror was no good, because it bent easily, giving weird distorted reflections. I moved to a 3mm 36” by 18” mirror and started working on a good sturdy frame.

[Photos: building the frame]

I used 3x1 wood for the frame, which actually measures 2.5 x 0.75 inches. On that I placed a face-frame.

I had a smaller 27” old monitor and decided to just use that. I mounted and braced the monitor with L-shaped brackets, so it is held firmly in place yet is easy to take out.

[Photo: monitor mounted with L-shaped brackets]

Final pictures

[Photos: the finished mirror]

Thursday, June 23, 2016

Caribbean Vacation planning in Summer


I just came back from a vacation in Aruba. A friend asked me why Aruba, as he thought it was an unusual choice.


I was trying to explain how my choice was largely data driven. Then I thought I’d share how I do it.

Why Now

First off, let’s get the constraints out of the way, because choosing between “where to go at this time” and “when to go to place X” needs different data crunching. I had to go now; the timing was not a choice. It was governed by the fact that I had a manageable work load, it is summer and school vacation is on (I cannot go when it is not), and I could not go later in the summer because of other engagements. I also wanted to relax at a Caribbean destination. So the question came down to where in the Caribbean, and I used historical data to make the choice. And boy did that work out!!

Weather Data

First off, it is hurricane season in the Caribbean, so my first quest was to find a place least likely to be hit by one. For that I headed to the NOAA site https://coast.noaa.gov/hurricanes/, which maps all hurricanes at a given place for the last 100 or more years. Choosing the Bahamas and then hurricanes only (not even storms) for June/July shows me the following. You can make out that it is a super bad idea.

[Map: hurricane tracks near the Bahamas, June/July]

I did the same search for other popular destinations in that area. E.g. Aruba for the last 100 years shows not a single one hit the island.

[Map: hurricane tracks near Aruba]

Now that we know a hurricane doesn’t generally hit this place, let’s see precipitation data.

[Chart: Aruba precipitation data]

So it’s very unlikely to rain either. Wow I am sold already.

Other Data

Obviously weather is not the only criterion. I did similar searches on crime rates, prevalence of diseases, etc. And when I had narrowed down to the area around the ABC islands (Aruba, Curacao, Bonaire), I finally made my choice of Aruba based on the kind of prices I got on hotels.

Wednesday, May 04, 2016

Customizing Windows Command Shell For Git session


Old habits die hard. From my very first days as a developer in .NET and Visual Studio, I have been used to having my Windows cmd shell title always show the branch I am working on, with a different color for each branch. That way when I have 15 different command windows open, I always know which is which. Unfortunately when I recently moved to Git, I forgot to set up that customization and made the mistake of making changes and checking in code on the wrong branch. So I whipped up a simple batch script to fix that.

@ECHO OFF
git checkout %1
for /f "delims=" %%i in ('git rev-parse --abbrev-ref HEAD') do set BRANCH=%%i

@ECHO.
title %BRANCH%

REM Aqua for branch Foo
if "%BRANCH%" == "Foo" color 3F 

REM Red for branch bar
if "%BRANCH%" == "Bar" color 4F

REM Blue
if "%BRANCH%" == "dev" color 1F

I saved the above as co.bat (short for checkout) and now I switch branches using co <branch-name>.

You can see all the color options by running color /? in your command window.

image

or

image

Monday, February 22, 2016

Identifying your Arduino board from code

For my IoT project I needed to write code slightly differently for specific Arduino boards. E.g. for the Arduino Uno I wanted to use Serial to talk to the ESP8266 and for the Arduino Mega I wanted to use Serial1. So basically I wanted board-specific #defines

#ifdef MEGA
    #define SERIAL Serial1
#elif defined(UNO)
    #define SERIAL Serial
#endif

SERIAL.println("Something");

For that I needed the board-specific #defines for each of the board options in the Arduino IDE, so that as I change the board type in the IDE, I automatically build code for that specific board.

[Screenshot: board selection in the Arduino IDE]

That information is actually available in a file called boards.txt inside your Arduino install folder. For me it is G:\arduino-1.6.5\arduino-1.6.5-r5\hardware\arduino\avr\boards.txt. For each board there is a section inside that file, and the relevant entries look something like the following

##############################################################

uno.name=Arduino/Genuino Uno

uno.vid.0=0x2341
uno.pid.0=0x0043
uno.vid.1=0x2341
uno.pid.1=0x0001
...
uno.build.board=AVR_UNO
uno.build.core=arduino
uno.build.variant=standard

The .board entry, when prefixed with ARDUINO_, becomes the #define. I wrote a quick PowerShell routine to list all such entries. The code is on GitHub at https://github.com/bonggeek/Samples/blob/master/Arduino/ListBoard.ps1

# Find every boards.txt under the given root folder
$f = Get-ChildItem -Path $args[0] -Filter "boards.txt" -Recurse
foreach($file in $f)
{
    Write-Host "For file" $file.FullName

    foreach ($l in get-content $file.FullName) {
        # Remember the friendly board name...
        if($l.Contains(".name")) {
            $b = $l.Split('=')[1];
        }

        # ...and print it alongside the ARDUINO_ define when the .board entry shows up
        if($l.Contains(".board")) {
            $s = [string]::Format("{0,-40}ARDUINO_{1}", $b, ($l.Split('=')[1]));
            Write-Host $s
        }
    }
}

Given the root folder of the Arduino install as argument, you get the following.

PS C:\Users\abhinaba> D:\SkyDrive\bin\ListBoard.ps1 G:\arduino-1.6.5\
For file G:\arduino-1.6.5\arduino-1.6.5-r5\hardware\arduino\avr\boards.txt
Arduino Yún                            ARDUINO_AVR_YUN
Arduino/Genuino Uno                     ARDUINO_AVR_UNO
Arduino Duemilanove or Diecimila        ARDUINO_AVR_DUEMILANOVE
Arduino Nano                            ARDUINO_AVR_NANO
Arduino/Genuino Mega or Mega 2560       ARDUINO_AVR_MEGA2560
Arduino Mega ADK                        ARDUINO_AVR_ADK
Arduino Leonardo                        ARDUINO_AVR_LEONARDO
Arduino/Genuino Micro                   ARDUINO_AVR_MICRO
Arduino Esplora                         ARDUINO_AVR_ESPLORA
Arduino Mini                            ARDUINO_AVR_MINI
Arduino Ethernet                        ARDUINO_AVR_ETHERNET
Arduino Fio                             ARDUINO_AVR_FIO
Arduino BT                              ARDUINO_AVR_BT
LilyPad Arduino USB                     ARDUINO_AVR_LILYPAD_USB
LilyPad Arduino                         ARDUINO_AVR_LILYPAD
Arduino Pro or Pro Mini                 ARDUINO_AVR_PRO
Arduino NG or older                     ARDUINO_AVR_NG
Arduino Robot Control                   ARDUINO_AVR_ROBOT_CONTROL
Arduino Robot Motor                     ARDUINO_AVR_ROBOT_MOTOR
Arduino Gemma                           ARDUINO_AVR_GEMMA

So now I can use

#ifdef ARDUINO_AVR_MEGA2560
    // Serial 1: 19 (RX) 18 (TX);
    #define SERIAL Serial1
#elif defined(ARDUINO_AVR_NANO)
    #define SERIAL Serial
#endif // ARDUINO_AVR_MEGA2560

Sunday, January 24, 2016

ESP8266 Wifi With Arduino Uno and Nano

If you are trying to add Wifi connectivity to an existing Arduino project or have serious aspirations of developing an Internet of Things (IoT) solution, Arduino + ESP8266 wifi module is one of the top choices. Especially the Nano, because it is super cheap (<$3) and very small. Running some sort of web-server directly on the ESP8266 (e.g. via Lua) doesn't cut it due to the lack of IO pins on the ESP8266. You can get a full IoT node out at under $12 with a few sensors, an Arduino Nano and an ESP8266 module (excluding the power supply).

In spite of a plethora of posts online, it turned out to be very hard for me to get this combination to work. I spent at least 3-4 days until I actually got it right. The main problem I see is that a lot of the solutions online are downright incorrect, not recommended, or meant for other boards (e.g. the Arduino Mega). Also there are a few gotchas that are not commonly called out. Before I start, let me get all of those out of the way

  1. The Arduino Uno/Nano is very different from, say, the Mega, which can supply more current and has a different number of UARTs. The steps to make an Uno or Nano work are different.
  2. Power Supply
    1. The ESP8266 is powered by 3.3V and NOT 5V. So you cannot have a common power supply between the Arduino and the ESP8266
    2. The ESP8266 draws way more current (200mA) than can be supplied by the 3.3v pin on the Uno/Nano. Don't even try it; I don't buy it from anyone who claims to have done this. Maybe they have some higher-power variant of Arduino (Mega??) that can do it.
    3. So you either use a 3.3v 1A power supply for the ESP8266 with a ground common with the 5V powering the Arduino, or you use a 5v to 3.3v step-down (e.g. like here).
  3. Arduino <-> ESP8266
    1. All the ESP8266 modules I bought came with the UART serial IO speed (baud) set to 115200. Now the problem is that the Uno/Nano has only one HW serial, which is used for communicating with the PC over USB while you are debugging. You can use any other two IO pins to talk to the ESP8266 using SoftwareSerial, but it does not support that high a baud rate. If you try 115200 for Arduino <-> ESP8266 communication you will get garbage. A lot of articles online show a setup with the Arduino Mega, which does have two HW serial IOs, with which you can easily get 115200 and more. So you need to dial the ESP8266 settings down to a more manageable baud of 9600
    2. Arduino IO pins drive 5V and the ESP8266 accepts 3.3v (max 3.6). I have seen people connect the pins directly, but that is over-driving the ESP8266. If it doesn't burn out immediately (the cheaper ones do), it will burn out soon. I suggest you use a voltage divider built from simple resistors to have the Arduino transmit (TX) pin drive the ESP8266 receive (RX) pin; see the quick math after this list
    3. For some strange reason the D2/D3 pins on the Arduino Nano didn't work for me for communicating with the ESP8266. I have no explanation for this and it happened on two separate Nanos. The Arduino would just read a whole bunch of garbage characters. So I had to move to pins 8/9.
    4. In spite of whatever I did, garbage characters would still come in sometimes. So I wrote a small bit of filter code to ignore them
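Quick math on that voltage divider, using the 1K and 2.2K resistors from the parts list below: with TX driving a 1K resistor in series and 2.2K from the ESP8266 RX node to ground, the RX pin sees 5v × 2.2 / (1 + 2.2) ≈ 3.4v, comfortably under the 3.6v maximum.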


Things you need

  1. ESP8266
  2. Arduino Nano
  3. Power supply 5v and 3.3v
  4. Resistors 1K, 2.2K, 10K
  5. FTDI USB to serial TTL adapter. Link (optional, see below)

Setting up ESP8266

As mentioned above, I first set the ESP8266 baud rate to 9600. If yours is already 9600 then there is nothing to be done; if not, you need to make the following connection

PC (USB) <-> FTDI <-> ESP8266

Then, using specific AT commands from the PC, set the 9600 baud rate on the ESP8266. I used the following circuit, where the connections are as follows

FTDI TX –> via voltage divider (to move 5v to ~3.3v) to ESP8266 RX (blue wire)
FTDI RX –> directly to ESP8266 TX (green wire). A 3.3v signal on an input pin will be read as 1.
FTDI GND to common ground (black)

ESP8266 GND to common GND (black)
ESP8266 VCC to 3.3v (red)
ESP8266 CH_PD to 3.3v via a 10K resistor (red)

Power supply GND to common GND

PC to FTDI USB.

Once that is set, bring up the Arduino IDE and do the following from the menu

  1. Tools –> Port –>COM{n}. For me it was COM6
  2. Then Tools –> Serial monitor

In the serial monitor, ensure you have the following set correctly. The baud should match the preset baud of your ESP8266. If you are not sure, use 115200 and type the command AT. It should return OK; if not, try changing the baud until you get that.

[Screenshot: serial monitor settings]

Then change the baud rate with the following command, and you should get OK back

AT+CIOBAUD=9600

After that, immediately change the baud rate in the serial monitor to 9600 as well and issue an AT command. You should see OK. You are all set on the ESP8266 side.
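Note: depending on the AT firmware version on your module, AT+CIOBAUD may be absent or deprecated; on newer firmware the equivalent command that persists the setting is of the form

AT+UART_DEF=9600,8,1,0,0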

Setting up Arduino Nano + ESP8266

This step should work for the Uno as well. Essentially make the same circuit as above, but now instead of the FTDI use the Arduino. I used pins 8 and 9 on the Arduino for RX and TX respectively.

[Diagram: Arduino Nano to ESP8266 wiring]

Debugging and WIFI Setup

Even though I could easily run AT commands in the PC <-> FTDI <-> ESP8266 setup, I ran into various issues doing the same programmatically in the PC <-> Arduino <-> ESP8266 setup. So I wrote the following very simple code to pass commands typed on the PC through the Arduino to the ESP8266, and the reverse for output.

The code is on GitHub at https://github.com/bonggeek/Samples/blob/master/Arduino/SerialRepeater.ino

#include <SoftwareSerial.h>
SoftwareSerial softSerial(8, 9); // RX, TX

void setup() 
{
  uint32_t baud = 9600;
  Serial.begin(baud);
  softSerial.begin(baud);
  Serial.print("SETUP!! @");
  Serial.println(baud);
}

void loop() 
{
    // Forward ESP8266 output to the PC, dropping junk/control characters
    while(softSerial.available() > 0) 
    {
      char a = softSerial.read();
      if(a == '\0')
        continue;
      if(a != '\r' && a != '\n' && (a < 32))
        continue;
      Serial.print(a);
    }

    // Forward (and echo) everything typed on the PC to the ESP8266
    while(Serial.available() > 0)
    {
      char a = Serial.read();
      Serial.write(a);
      softSerial.write(a);
    }
}

With this code built and uploaded to the Arduino, I launched the serial monitor on my PC. After that I could type commands in the serial monitor and have the Arduino pass them on to the ESP8266 and read back the response. I could still see some junk characters coming back (in red). All commands are in green; I could easily enumerate all Wifi networks in range using AT+CWLAP and even connect to my Wifi.
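For reference, the sequence I mean is of the following form (SSID and password are placeholders; exact responses vary with AT firmware version):

AT+CWMODE=1
AT+CWLAP
AT+CWJAP="MySSID","MyPassword"

The first puts the module in station mode, the second lists access points in range, and the third joins one.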

[Screenshot: AT command session in the serial monitor]

Wednesday, December 23, 2015

Publishing an ASP.NET 5 Web-Application to IIS Locally



I ran into a few issues and discovered some kinks in publishing the new ASP.NET 5 Web-Application to Internet Information Services (IIS) on the local box and then accessing it from other devices on the same network.

While there may be a number of different ways of doing this, the following worked for me.

Visual Studio

After you have created a new project using File > New Project > ASP.NET Web Application


Change the build to use x64 and not Any CPU


Now right-click on the project and choose Publish. We will use File System publishing to push the output to a folder location and then get IIS to load it.


The publish target is inside the default IIS web root folder. This might be different for your setup.


Use 64-bit Release in the settings

Finally publish it


So with this step done your web application is now published to c:\inetpub\wwwroot\HomeServer

IIS

Now launch the IIS Manager by hitting the Win key and searching for IIS Manager

Right-click on the default web-site and use Add Application.


Create and point the application to the published app. Note that this is not the top-level c:\inetpub\wwwroot\HomeServer, but rather the wwwroot folder inside it. This is required because the web.config is inside that folder. So we use c:\inetpub\wwwroot\HomeServer\wwwroot


Hit OK to create the web-app and then restart the web-site


Now browse to the web-site, which in my case is http://localhost/HomeServer


Accessing from local network

To access the same website from other devices on the same network you need to allow access through the firewall. Search for and select (Win key and type) “Allow an App Through Windows Firewall”, then in the Control Panel window that opens (Control Panel\System and Security\Windows Firewall\Allowed apps), click the “Change Settings” button and check “World Wide Web Services”.
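Alternatively, the same rule can be added from an elevated command prompt (a sketch using netsh; the rule name is arbitrary):

netsh advfirewall firewall add rule name="HTTP 80" dir=in action=allow protocol=TCP localport=80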


Find the local server's IP by running the ipconfig command in a command shell. Then you can reach the site from other devices on the same network.
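For example, to pull out just the address lines:

ipconfig | findstr /i "IPv4"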

Here is a screen shot of accessing the web-site from my cell phone, connected to the same network over wifi.

[Screenshot: the site on a phone]