Wednesday, October 10, 2018

SAP HANA Large Instances on Azure


Over the past year I have been working to light up bare-metal machines on Azure Cloud. These are specialized bare-metal machines with extremely high amounts of RAM and CPU, in this particular case purpose-built to run the SAP HANA in-memory database. We call them HANA Large Instances and they come certified by SAP (see list here).

So why bare-metal? They are huge, high-performance machines that go all the way up to 24TB RAM (yup) and 960 CPU threads. They are purpose-built for the HANA in-memory database and have the right CPU-to-memory ratio and high-performance storage to run demanding OLTP + OLAP workloads. Imagine a bank being able to load every credit card transaction from the past 5 years and run analytics, including fraud detection on a new transaction, in a few seconds; or track the flow of commodities from the world's largest warehouses to millions of stores and hundreds of millions of customers. These machines come with a 99.99% SLA and can be reserved by customers across the world in US-East, US-West, Japan-East, Japan-West, Europe-West, Europe-North, Australia-SouthEast and Australia-East for SAP HANA workloads.

At SAP TechEd and SAPPHIRE I demoed bare-metal HLI machines with standard Azure Portal integration. Right now customers can see their HLI machines in the portal, and coming soon they will even be able to reboot them from the portal.

Portal preview

Click on the screenshot below to see a recorded video of how HANA Large Instances are visible on the Azure portal and how customers can raise support requests from the portal.

Portal screenshot

Reboot Demo

This is something we are working on right now and it will be available soon. Click on the screenshot below to see the video of a HANA Large Instance being rebooted directly from the portal.

Getting Access

Customers with HLI blades can run the following CLI command to register our HANA resource provider:

az provider register --namespace Microsoft.HanaOnAzure
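To check that the registration has gone through, the standard provider query works (shown here as a sketch):

az provider show --namespace Microsoft.HanaOnAzure --query registrationState --output tsv

It prints Registered once the resource provider is available on the subscription.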

Alternatively, use the portal: go to your subscription that has HANA Large Instances, select “Resource Providers”, type “Hana” in the search box and click Register.




Friday, June 01, 2018

Deploy Cloud Dev Box on Azure with Terraform


Summary: see the linked code for a Terraform-based solution to deploy VMs in Azure with full remote desktop access.

Now the longer form :). I have blogged in the past about how to set up an Ubuntu desktop on Azure that you can RDP (remote desktop) into. Over the past few months I have moved to doing most of my development work exclusively on a cloud VM, and I love having a full desktop experience on my customized “Cloud Dev box”. I RDP into it from my dev box at work, Surface Pro, secure laptop, etc.

I wanted to ensure that I can treat the box as cattle and not a pet. So I came up with Terraform-based scripts to bring up these cloud dev boxes. I have also shared them with my team in Microsoft and a few devs are already using them. I hope they will be useful to you as well in case you want something like that. All code is at

A few things about the main Terraform script:

  1. It is good security practice to ensure that your VM is locked down. I use Azure NSG rules to ensure that the VM denies inbound traffic from the Internet. The script accepts parameters where you can give IP ranges which will then be opened up. This ensures that your VM is accessible only from safe locations; in my case those are Microsoft IP ranges (for work) and my home IP address (see the sketch just after this list).
  2. While you can use just the TF file and setup script, I have a driver script that you might find useful
  3. Once the VM is created, I use the remote execution feature of Terraform to run a script that installs the various software I need, including the Ubuntu desktop and xrdp for remote desktop. This takes at least around 10 minutes
  4. By default a Standard_F8s machine is used, but that can be overridden with larger sizes (e.g. Standard_F16s). I have found that machines smaller than that don't provide adequate performance. Note: you will incur costs for running these biggish VMs
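As a sketch of the lock-down mentioned in point 1, an equivalent NSG rule can also be created by hand with the Azure CLI; the resource group, NSG name and IP range below are placeholders you would substitute:

az network nsg rule create --resource-group myDevRg --nsg-name myDevNsg --name AllowTrustedRdp --priority 100 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 3389 --source-address-prefixes 203.0.113.0/24

Inbound traffic from everywhere else is then rejected by the NSG's default deny rule.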


Obviously you need Terraform installed. I think the whole system works really well if you launch from Azure Cloud Shell, because that way all the credential stuff is automatically handled, and Cloud Shell comes pre-installed with Terraform.

If you want to run from any other dev box, you need to have the Azure CLI and Terraform installed (use the script for it). Then do the following, where the subscription ID is the one under which you want the VM to run.

az login
az account set --subscription="<some subscription Id>"
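To double-check that the right subscription is active before running the scripts, this standard CLI command helps:

az account show --output table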

While you can download the files from here and use them as-is, you are better off customizing the script and then running it. I use the following to run it:

curl -O
chmod +x
./ abhinab <password>



Now you can use an RDP client like mstsc to log into the machine.

NOTE: In my experience 1080p resolution works well; 4K lags too much to be useful. Since mstsc defaults to full-screen, be careful if you are working on a high-res display and explicitly use 1080p resolution.
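For example, you can force mstsc into a 1080p window like this (the IP is a placeholder):

mstsc /v:&lt;machine_ip&gt; /w:1920 /h:1080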

There I am logged into my cloud VM.


Wednesday, May 16, 2018

Getting Azure Cloud Location


I have had some asks on how to discover which Azure cloud the current system is running on. Basically you want to figure out if you are running in the Azure public cloud or in one of the specialized government clouds.

Unfortunately this is not currently available in the Instance Metadata Service. However, it can be found out using an additional call. The basic logic is to get the current location over IMDS and then call the Azure management API to see which cloud that location is present in.

A sample script can be found at

locations=`curl -s -H Metadata:True "http://169.254.169.254/metadata/instance/compute/location?api-version=2017-04-02&format=text"`

# Test regions

endpoints=`curl -s`
publicLocations=`echo $endpoints | jq .cloudEndpoint.public.locations[]`

if grep -q $locations <<< $publicLocations; then
    echo "PUBLIC"
    exit 1
fi

chinaLocations=`echo $endpoints | jq .cloudEndpoint.chinaCloud.locations[]`
if grep -q $locations <<< $chinaLocations; then
    echo "CHINA"
    exit 2
fi

usGovLocations=`echo $endpoints | jq .cloudEndpoint.usGovCloud.locations[]`
if grep -q $locations <<< $usGovLocations; then
    echo "US GOV"
    exit 3
fi

germanLocations=`echo $endpoints | jq .cloudEndpoint.germanCloud.locations[]`
if grep -q $locations <<< $germanLocations; then
    echo "GERMAN"
    exit 4
fi

echo "Unknown"
exit 0
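A usage sketch, assuming you saved the script as whichcloud.sh (the name is my placeholder): running it inside a public-cloud VM should print PUBLIC and, per the exit codes above, return status 1 (note the script uses 0 for the unknown case).

$ bash whichcloud.sh
PUBLIC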

This is what I see for my VM


Monday, March 26, 2018

Azure Serial Console


My team just announced the public preview of the Azure Serial Console. This has been a consistent ask from customers who want to recover VMs in the cloud. Go to your VM in the Azure portal and then click on the Serial Console button.


This opens a direct serial console connection to your VM; the VM is not required to be open to the internet. This is amazing for diagnosing VM issues. E.g. if you are not able to SSH to the VM for some reason (blocked port, bad config change, busted boot config), you drop into the serial console and interact with your machine. Cool or what!!



To show you the difference between an SSH connection and the serial console, this is my machine booting up!!


Friday, October 06, 2017

Remote Ubuntu desktop on Azure


For the past many months I have had my dev boxes on the cloud. I am happily using a monster Windows VM and a utility Ubuntu desktop in the cloud. I realized after talking to a few people that they don't realize how easy it is to set up a Linux remote desktop in the Azure cloud. Here are the steps.

For VM configuration I needed a small VM with large enough network bandwidth to support remoting. Unfortunately on Azure you cannot choose networking bandwidth directly; rather, all the VMs on a physical box get networking bandwidth proportional to the number of cores they have. So I just created a VM based on the “Standard DS4 v2 Promo (8 vcpus, 28 GB memory)” size and connected it to Microsoft ExpressRoute. If you are OK with a public IP, you can skip setting up ExpressRoute and just ensure your VM has a public IP.

Then I went to the portal and enabled RDP. For that, in the portal choose VM –> Networking and add a rule to enable RDP.
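If you prefer the CLI, the standard az vm open-port command adds the same inbound rule (resource group and VM name are placeholders):

az vm open-port --resource-group myRg --name AbhiUbuntu --port 3389 --priority 900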


Finally I sshed into my VM with

c:\bin\putty.exe abhinaba@AbhiUbuntu

Time now to install a bunch of tools

sudo apt-get update
sudo apt-get install xrdp
sudo apt-get install ubuntu-desktop
sudo apt-get install xfce4
sudo apt-get update

Setup xsession

echo xfce4-session > ~/.xsession

Open the following file

sudo gvim /etc/xrdp/

Set the content to


if [ -r /etc/default/locale ]; then
    . /etc/default/locale
    export LANG LANGUAGE
fi


Start the xrdp service

sudo /etc/init.d/xrdp start
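If you later change the xrdp or xsession configuration, restarting the service picks the changes up:

sudo /etc/init.d/xrdp restart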

And then from my Windows machine: mstsc /v:machine_ip. I am presented with the login screen.


And then I have a full Ubuntu desktop on Azure :)


Thursday, September 14, 2017

Distributed Telemetry at Scale


In Designing Azure Metadata Service I elaborated on how we run the Azure Instance Metadata Service (IMDS) at massive scale. Running at this scale in 36 regions of the world (at the time of writing), on an incredible number of machines, is a hard problem in terms of monitoring and collecting telemetry. Unlike centralized services, it is not as simple as connecting to a single telemetry pipeline and being done with it.

We need to ensure that

  1. We do not collect too much data (cost/latency)
  2. We do not collect too little (hard to debug issues)
  3. Data collection is fast
  4. We are able to drill down into specific issues and problem areas
  5. Do all of the above while running in 36 regions of the world
  6. Continue to do all of the above as Azure continues its phenomenal growth

To meet all these goals we take a three-pronged approach. We break telemetry out into 3 paths:

  1. Hot-path: Minimal numeric data that can be uploaded super fast (delayed by a few seconds), which we can use for monitoring our service and alerting in case an anomaly is detected
  2. Warm-path: Richer textual data that is a few minutes delayed; we use this to drill down into issues remotely in case the hot-path flagged an issue
  3. Cold-path: This gives us full-fidelity data to monitor



Even though we run in so many places, we want to ensure that we have near real-time alerting and monitoring and can quickly catch if something bad is happening. For that we use performance and functionality counters. These counters measure the type of responses we are giving back, their latencies, data sizes, etc. All of them are numeric and track each call in progress. We then have high-speed uploaders on each machine, with backends that collect these. We attach alerts to these counters at a per-cluster level, so we can catch latency issues and failures with a few seconds' delay. These counters only tell us that something is going bad, not why. We have tens of such numeric high-speed telemetry streams coming from each IMDS instance.

Here’s a snapshot of one such counter in our dashboard showing latency at 90th percentile.


In addition we have external dial-tone services that keep pinging IMDS to ensure the service is up everywhere. If there is no response then likely there has been a crash or some other deadlock. We measure the dial-tone as part of our up-time and also have alerts tied to this.



If hot-path counter driven alerts tell us something has gone wrong and an on-call engineer is awakened, the next order of business is to quickly figure out what's going on. For that we use our warm-path pipeline. This pipeline uploads informational and error-level logging. Due to the volume, the data is delayed by a few minutes, and the query granularity can also slow down fetching it. This is why one focus of the hot-path counters is to narrow the location of a problem down to the cluster or machine level.

The alert directly filters the logs being uploaded from a cluster/machine and brings up all relevant logs. In most cases these are sufficient for us to detect issues. In case that doesn't work, we need to go into the detailed logs.


Every line of logs (error/info/verbose) our service creates is stored locally on the machines with certain retention policies. We have built tools so that, given an alert, an engineer can run a command from their dev box to fetch the log directly from that machine, wherever in the world the machine with the log exists. For hard-to-debug issues this is the last recourse.

However, even cooler is that we use our CosmosDB offering as a document store and put all error and info logs into it. This ensures the logs remain queryable for a long time (months) for reporting and analysis. We also run jobs that read the logs from these cosmos streams and then shove them into Kusto as structured data. Kusto is also available to users under the fancier name of Azure Application Insights Analytics. I was floored by the insight we can get with this pipeline. We upload close to 8 terabytes of log data a day into cosmos and are still able to query all data over months in a few seconds.

Here’s a quick peek into seeing what kind of responses IMDS is handing out.


A look into the kinds of queries coming in.


Distribution of the IMDS versions being asked for.


We can extract patterns from the logs, run regex matching and all sorts of cool filters, and at the same time render data across our fleet in seconds.

Monday, September 11, 2017

Designing Azure Metadata Service


Some time back we announced the general availability of the Azure Instance Metadata Service (IMDS). IMDS has been designed to deliver instance metadata information to every IaaS virtual machine running on Azure over a REST endpoint. IMDS works as a data aggregation service: it fetches data from various sources and surfaces it to the VM in a consistent manner. Some of the data can already be on the physical machine running the VM; other data lives inside regional services that are remote from the machine.

As you can imagine, the scale of usage of this service is immense and spans the globe (36 regions at the time of writing), and Azure usage doubles YoY. So any design for IMDS has to be highly scalable and built for future growth.

We had many options for building this service, based both on the various reliability parameters we wanted to hit and on engineering ease.


Given a typical cloud hierarchical layout, you could imagine such a service being built in any one of the following ways:

  1. Build it like any other cloud service that runs on its own IaaS or PaaS infrastructure, with load-balancers, auto-scaling, mechanisms for distributing across regions, sharding etc.
  2. Dedicate machines in clusters or data centers that run this service locally
  3. Run micro-services directly in the physical machines that host the VMs

Initially, building a cross-region managed service seems like the simpler choice. Pick any standard REST stack, deploy using any of the many deployment models available in Azure, and go with that. With auto-scaling and load balancers it should just work and scale.

Like with any distributed system, we looked at our CAP model.

  1. Consistency: We could live with a more relaxed eventual consistency model for metadata. You can update the metadata of a virtual machine by making changes to it in the portal or using the Azure CLI, and eventually the virtual machine gets the last updated value
  2. Availability: The data needs to be highly available because various key pieces in the Azure internal stack take a dependency on this metadata, along with customer code running inside the VM
  3. Partition: The network is highly partitioned, as is evident from the diagram above

Metadata of virtual machines is updated infrequently but used heavily across the stack (reads are much more common than updates). We needed to guarantee very high availability over a very highly partitioned infrastructure, so we chose to optimize for partition tolerance and availability with eventual consistency. With that, a regional service was immediately ruled out because it is not possible to provide high enough availability with that model.

Coupled with the above requirements and our existing engineering investments, we chose to go with approach #3: running IMDS as a micro-service on each Azure host machine.

  1. Data is fetched and cached on every machine, which means the data is lower in liveness but always eventually consistent as data gets pushed to those machines. Varying levels of liveness exist based on the specific source the metadata is fetched from. Some metadata needs to be pushed to the machine before it is applied anyway and is hence always live; others, like say VM tags, have a lower liveness guarantee
  2. Since the data is served from the same physical machine, the call doesn't leave the machine at all and we can provide very high availability. Other than during ongoing software deployments and system errors, the data is always available. There is no network partition.
  3. There is no need to further balance load or shard out data, because the data is on the machine where it is being served. The solution automatically scales with Azure, because more customers means more Azure machines running them and more places IMDS can run on
  4. However, deployment and telemetry at this scale are tough. Imagine how large an Azure deployment is and consider deploying and updating a service that runs everywhere on it.

It’s really fun working on problems on this scale and it’s always a learning experience. I look forward to share more details on my blog here.

Friday, September 08, 2017

Azure Instance Metadata Service


One of the projects in Microsoft Azure that I have been involved with is the Instance Metadata Service (IMDS) for Azure. It's a massively distributed service running on Azure that, among other things, brings metadata information to IaaS virtual machines running on Azure.

IMDS is documented at. Given that the API is already well documented at that location and, like all services, will evolve to encompass more scenarios in the future, I will not repeat that effort here. Rather, I wanted to cover the background behind some of the decisions in the API design.

First let's look at the API itself and break it down to its essential elements:

D:\>curl -H Metadata:True "http://169.254.169.254/metadata/instance?api-version=2017-04-02&format=text"
compute/ network/

The metadata API is REST-based and available over a GET call at the non-routable IP address 169.254.169.254. This IP has been reserved in Azure for some time now and is also used for similar reasons in AWS. All calls to this API have to carry the header Metadata:True. This ensures that the caller is not blindly forwarding an external call it received but is deliberately accessing IMDS.

All metadata is rooted under /metadata/instance. In the future other kinds of publicly available metadata could be made available under /metadata.

The API versions are documented in the link shared above, and the caller needs to explicitly ask for a version, e.g. 2017-04-02. Interestingly, it was initially named 2017-04-01, but someone on our team thought that it's not a great idea to ship the first version of an API dated on April Fools' Day.

We did consider supporting something like “latest”, but experience tells us that it leads to fragile code. As versions get updated, invariably some user's script/code that depends on latest having a particular shape breaks. Moreover, it's hard from our side to gauge what versions are being used in the wild, since users may just use latest but have an implicit dependency on some of the metadata values.

We support two formats: JSON and text. Using JSON you can fetch the entire metadata and parse it on your side. A sample PowerShell screenshot is shared below.
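On Linux the JSON output pairs nicely with jq; for instance, this pulls the VM ID (the jq query path is my illustration, not from the official docs):

curl -s -H Metadata:True "http://169.254.169.254/metadata/instance?api-version=2017-04-02" | jq -r .compute.vmId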


However, we wanted to support a simple text-based approach as well. It's easiest to imagine the metadata as a DOM (document object model) or even a directory tree. On asking for text format at any level (the root being /metadata/instance), the immediate child data is returned. In the sample above, the top-level compute and network are returned. Each is on its own line, and if a line ends with a slash it indicates that the node has more children. Since compute/ was returned, we can fetch its children with the following:

D:\>curl -H Metadata:True "http://169.254.169.254/metadata/instance/compute?api-version=2017-04-02&format=text"
location name offer osType platformFaultDomain platformUpdateDomain publisher sku version vmId vmSize

None of them have a “/” suffix, and hence they are all leaf-level data. E.g. we can fetch the unique ID of the VM and the operating system type with the following calls:

D:\>curl -H Metadata:True "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-04-02&format=text"
c060492e-65e0-40a2-a7d2-b2a597c50343
D:\>curl -H Metadata:True "http://169.254.169.254/metadata/instance/compute/osType?api-version=2017-04-02&format=text"
Windows

The entire idea is that the API is usable from callers like bash scripts or other cases that don't want or need to pull in a JSON parser. The following bash script pulls the vmId from IMDS and displays it:

vmid=$(curl -H Metadata:True "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-04-02&format=text" 2>/dev/null)
echo $vmid

I have shared a few samples of using IMDS at

Do share feedback and requests using the URL

Monday, November 14, 2016

Magic Mirror

I just implemented a MagicMirror for our home. Here is video footage of it working.

Basically I used the open-source MagicMirror project. I had to make a few changes: I added a hideall module so that everything is hidden and becomes visible on detecting motion. My sources are at

This is the first prototype working on desk monitor with a 1mm two-way mirror held in front of it.


It was evident that the thin 1mm acrylic mirror was no good, because it bent easily, giving weird distorted images. I moved to a 3mm 36” by 18” mirror and started working on a good sturdy frame.



I used 3x1 wood, which is actually 2.5 x 0.75 inches. On that I placed a face-frame.

I had an old, smaller 27” monitor and decided to just use that. I mounted and braced the monitor with L-shaped brackets, so it is easy to take out while still being held firmly in place.


Final pictures


Thursday, June 23, 2016

Caribbean Vacation planning in Summer


I just came back from a vacation in Aruba. A friend asked me why Aruba, as he thought it was an unusual choice.


I was trying to explain how my choice was largely data-driven. Then I thought I'd share how I do it.

Why Now

First off, let's get the constraints out of the way, because choosing between “where to go at this time” vs. “when to go to place X” needs different data crunching. I had to go now; the time was not a choice. It was governed by the fact that I had a manageable workload, it is summer and school vacations are on (I cannot go when they are not), and I could not go later in the summer because of other engagements. Also, I wanted to go relax at a Caribbean destination. So the question came down to where in the Caribbean, and I used historical data to make the choice. And boy did that work out!!

Weather Data

First off, it is hurricane season in the Caribbean, so my first quest was to find a place where I am least likely to be hit by one. For that I headed to the NOAA site, which maps all hurricanes at a given place for the last 100 or more years. Choosing the Bahamas and then all hurricanes only (not even counting storms) for June/July shows me the following. You can make out that it is a super bad idea.


I did the same search for other popular destinations in that area. E.g. Aruba for the last 100 years shows that not a single one has hit this island.


Now that we know hurricanes don't generally hit this place, let's look at the precipitation data.


So it’s very unlikely to rain either. Wow I am sold already.

Other Data

Obviously weather is not the only criterion. I did similar searches on crime rates, prevalence of diseases, etc. And then, when I had narrowed down to the area around the ABC islands (Aruba, Curacao, Bonaire), I finally made my choice of Aruba based on the kind of prices I got on hotels.

Wednesday, May 04, 2016

Customizing Windows Command Shell For Git session


Old habits die hard. From my very first days as a developer using .NET and Visual Studio, I have been used to having my Windows cmd shell title always show the branch I am working on, and having a different color for each branch. That way when I have 15 different command windows open, I always know which is open where. Unfortunately when I recently moved to Git, I forgot to customize that and made the mistake of making changes and checking in code on the wrong branch. So I whipped up a simple batch script to fix that.

git checkout %1
for /f "delims=" %%i in ('git rev-parse --abbrev-ref HEAD') do set BRANCH=%%i

title %BRANCH%

REM Aqua for branch Foo
if "%BRANCH%" == "Foo" color 3F 

REM Red for branch bar
if "%BRANCH%" == "Bar" color 4F

REM Blue
if "%BRANCH%" == "dev" color 1F

I saved the above as co.bat (short for checkout) and now I switch branches using co <branch-name>.

You can see all the color options by running color /? in your command window.




Monday, February 22, 2016

Identifying your Arduino board from code

For my IoT project I needed to write code slightly differently for specific Arduino boards. E.g. for the Arduino UNO I wanted to use Serial to talk to the ESP8266, and for the Arduino Mega I wanted to use Serial1. So basically I wanted to use board-specific #defines:

#ifdef MEGA
    #define SERIAL Serial1
#elif defined(UNO)
    #define SERIAL Serial
#endif


For that I needed the board-specific #defines for each of the board options in the Arduino IDE, so that as I change the board type in the IDE, I automatically build code for that specific board.


That information is actually available in a file called boards.txt inside your Arduino install folder. For me it is G:\arduino-1.6.5\arduino-1.6.5-r5\hardware\arduino\avr\boards.txt. For each board there is a section inside that file, and the relevant entries look something like the following:

##############################################################

uno.name=Arduino/Genuino Uno
...
uno.build.board=AVR_UNO
...
The .board entry, when prefixed with ARDUINO_, becomes the #define. I wrote a quick PowerShell routine to list all such entries. The code for it is on GitHub at

$f = Get-ChildItem -Path $args[0] -Filter "boards.txt" -Recurse
foreach ($file in $f)
{
    Write-Host "For file" $file.FullName
    foreach ($l in Get-Content $file.FullName)
    {
        if ($l.Contains(".name"))
        {
            $b = $l.Split('=')[1];
        }
        if ($l.Contains(".board"))
        {
            $s = [string]::Format("{0,-40}ARDUINO_{1}", $b, ($l.Split('=')[1]));
            Write-Host $s
        }
    }
}

Given the root folder of the Arduino install as an argument, you get the following:

PS C:\Users\abhinaba> D:\SkyDrive\bin\ListBoard.ps1 G:\arduino-1.6.5\
For file G:\arduino-1.6.5\arduino-1.6.5-r5\hardware\arduino\avr\boards.txt
Arduino Yún                            ARDUINO_AVR_YUN
Arduino/Genuino Uno                     ARDUINO_AVR_UNO
Arduino Duemilanove or Diecimila        ARDUINO_AVR_DUEMILANOVE
Arduino Nano                            ARDUINO_AVR_NANO
Arduino/Genuino Mega or Mega 2560       ARDUINO_AVR_MEGA2560
Arduino Mega ADK                        ARDUINO_AVR_ADK
Arduino Leonardo                        ARDUINO_AVR_LEONARDO
Arduino/Genuino Micro                   ARDUINO_AVR_MICRO
Arduino Esplora                         ARDUINO_AVR_ESPLORA
Arduino Mini                            ARDUINO_AVR_MINI
Arduino Ethernet                        ARDUINO_AVR_ETHERNET
Arduino Fio                             ARDUINO_AVR_FIO
Arduino BT                              ARDUINO_AVR_BT
LilyPad Arduino USB                     ARDUINO_AVR_LILYPAD_USB
LilyPad Arduino                         ARDUINO_AVR_LILYPAD
Arduino Pro or Pro Mini                 ARDUINO_AVR_PRO
Arduino NG or older                     ARDUINO_AVR_NG
Arduino Robot Control                   ARDUINO_AVR_ROBOT_CONTROL
Arduino Robot Motor                     ARDUINO_AVR_ROBOT_MOTOR
Arduino Gemma                           ARDUINO_AVR_GEMMA

So now I can use

#ifdef ARDUINO_AVR_MEGA2560
    // Serial1: 19 (RX), 18 (TX)
    #define SERIAL Serial1
#else
    #define SERIAL Serial
#endif // ARDUINO_AVR_MEGA2560

Sunday, January 24, 2016

ESP8266 Wifi With Arduino Uno and Nano

If you are trying to add Wifi connectivity to an existing Arduino project or have serious aspirations of developing an Internet of Things (IoT) solution, Arduino + the ESP8266 wifi module is one of the top choices. Especially the Nano, because it is super cheap (<$3) and very small in size. Using some sort of web server directly on the ESP8266 (e.g. via Lua) doesn't cut it due to the lack of IO pins on the ESP8266. You can get a full IoT node out at under $12 with a few sensors, an Arduino Nano and an ESP8266 module (excluding the power supply).

In spite of a plethora of posts online, it turned out to be very hard for me to get this combination to work. I spent at least 3-4 days until I actually got it right. The main problem I see is that a lot of the solutions online are actually downright incorrect, not recommended, or meant for other similar boards (e.g. the Arduino Mega). Also there are a few gotchas that are not commonly called out. Before I start, let me get all of those out of the way.

  1. The Arduino Uno/Nano is very different from, say, the Mega, which can supply more current and has a different number of UARTs. The steps to make a Uno or Nano work are different.
  2. Power Supply
    1. The ESP8266 is powered by 3.3V and NOT 5V. So you cannot have a common power supply between the Arduino and the ESP8266
    2. The ESP8266 draws way more current (200mA) than the 3.3V pin on the Uno/Nano can supply. Don't even try it; I don't buy it when anyone claims to have done this. Maybe they have some other high-power variant of Arduino (Mega??) that can do this.
    3. So you either use a 3.3V 1A power supply for the ESP8266 with a ground common with the 5V supply powering the Arduino, or you use a 5V-to-3.3V step-down converter (e.g. like here).
  3. Arduino <-> ESP8266
    1. All the ESP8266 modules I bought came with the UART serial IO speed (baud) set to 115200. Now the problem is that the Uno/Nano has only one HW serial, which is set up to be used for communicating over USB with the PC on which you are debugging. You can use any two other IO pins to talk to the ESP8266 using SoftwareSerial, but SoftwareSerial does not support that high a baud rate. If you try 115200 for Arduino <-> ESP8266 communication you will get garbage. A lot of articles online show a setup with the Arduino Mega, which does have two HW serial IOs, with which you can easily get 115200 and more. So you need to dial the ESP8266 settings down to a more manageable baud of 9600
    2. Arduino IO pins put out 5V and the ESP8266 accepts 3.3V (max 3.6V). I have seen people directly connect the pins, but that over-drives the ESP8266. If it doesn't burn out immediately (the cheaper ones do), it will burn out soon. I suggest you use a voltage divider made of simple resistors to have the Arduino transmit (TX) pin drive the ESP8266 receive (RX) pin
    3. For some strange reason the D2/D3 pins on the Arduino Nano didn't work for me for communicating with the ESP8266. I have no explanation for this, and it happened on two separate Nanos. The Arduino would just read a whole bunch of garbage characters. So I had to move to pins 8/9.
    4. In spite of whatever I did, garbage characters would still come in sometimes. So I wrote a small bit of filter code to ignore them (see the pass-through sketch later in this post)


Things you need

  1. ESP8266
  2. Arduino Nano
  3. Power supply 5v and 3.3v
  4. Resistors 1K, 2.2K, 10K
  5. FTDI USB to serial TTL adapter (optional, see below)

Setting up ESP8266

As mentioned above, I first set the ESP8266 baud rate to 9600. If yours is already 9600 then there is nothing to be done; if not, you need to make the following connection

PC (USB) <-> FTDI <-> ESP8266

Then, using specific AT commands from the PC, set the 9600 baud rate on the ESP8266. I used the following circuit, where the connections are as follows:

FTDI TX –> via voltage divider (to move 5V to ~3.3V) to ESP8266 RX (blue wire)
FTDI RX –> directly to ESP8266 TX (green wire). A 3.3V signal on an I/O pin will be considered a 1.
FTDI GND to common ground (black)

ESP8266 GND to common GND (black)
ESP8266 VCC to 3.3v (red)
ESP8266 CH_PD to 3.3v via a 10K resistor (red)

Power supply GND to common GND


Once that is set up, bring up the Arduino IDE and do the following using the menu:

  1. Tools –> Port –> COM{n}. For me it was COM6
  2. Then Tools –> Serial monitor

In the serial monitor, ensure you have the following set correctly. The baud should match the preset baud of your ESP8266. If you are not sure, use 115200 and type the command AT. It should return OK; if not, try changing the baud until you get that.


Then change the baud rate by using the following command, and you should get OK back.
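The exact command depends on the AT firmware on your module; on the ones I have seen it is one of these two standard ESP8266 AT commands (check your firmware documentation):

AT+CIOBAUD=9600

or, on newer firmware where the setting persists across resets:

AT+UART_DEF=9600,8,1,0,0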


After that, immediately change the baud rate in the serial monitor to 9600 as well and issue an AT command. You should see OK. You are all set on the ESP8266 side.

Setting up Arduino Nano + ESP8266

This step should work for the Uno as well. Essentially, make the same circuit as above, but instead of the FTDI use the Arduino. I used pins 8 and 9 on the Arduino for RX and TX respectively.



Debugging and Wifi Setup

Even though I could easily run AT commands in the PC <-> FTDI <-> ESP8266 setup, I ran into various issues while doing the same programmatically in the PC <-> Arduino <-> ESP8266 setup. So I wrote the following very simple code to pass commands typed on the PC through the Arduino to the ESP8266, and the reverse for output.

The code is on GitHub at

#include <SoftwareSerial.h>
SoftwareSerial softSerial(8, 9); // RX, TX

void setup()
{
  uint32_t baud = 9600;
  Serial.begin(baud);     // HW serial to the PC
  softSerial.begin(baud); // SW serial to the ESP8266
  Serial.print("SETUP!! @");
  Serial.println(baud);
}

void loop()
{
  while (softSerial.available() > 0)  // ESP8266 -> PC
  {
    char a = softSerial.read();
    if (a == '\0') continue;
    if (a != '\r' && a != '\n' && (a < 32)) continue; // filter junk chars
    Serial.print(a);
  }
  while (Serial.available() > 0)      // PC -> ESP8266
  {
    char a = Serial.read();
    Serial.print(a);                  // local echo
    softSerial.print(a);
  }
}

With this code built and uploaded to Arduino I launched the Serial monitor on my PC. After that I could type commands in my Serial Monitor and have the Arduino pass that only ESP8266 and read back the response. I can still see some junk chars coming back (in RED). All commands are in Green and could easily enumerate all Wifi in range using AT+CWLAP and even connect to my Wifi.