
Thursday, December 26, 2013

Getting Arduino Uno to work on Windows 8

I keep needing to do this and I couldn’t find one place with all the instructions, so I am capturing them here. The standard instructions at http://arduino.cc/en/Guide/Windows also didn’t work for me.

Get the Software

  1. Get the Arduino software from http://arduino.cc/en/Main/Software. Choose the Windows (ZIP file) download and unzip it to your local PC. I used D:\Skydrive\bin\arduino-1.0.5

Disable Driver Signature Enforcement

Unfortunately this step disables a security feature of the OS, but I couldn’t find a way to avoid it.

  1. Open a command prompt and run the command
    shutdown.exe /r /o /f /t 00
  2. System restarts with Choose an option screen
  3. Select Troubleshoot
  4. Select Advanced options
  5. Select Windows Startup Settings
  6. Click Restart and it will restart into the Advanced Boot Options Screen
  7. Press the number key for Disable Driver Signature Enforcement (which was 7 in my case)
  8. System will restart with driver signature enforcement disabled.

Install The Driver

Press Windows key + W, type “Devices and Printers” and open it. Connect the Arduino board over USB. You should see an entry called Unknown Device.

image

Run the installer “D:\Skydrive\bin\arduino-1.0.5\drivers\dpinst-amd64.exe” (or locate the corresponding path in your installation folder). The window above should get updated as below.

image

You can also verify by again hitting Windows Key + W, typing Device Manager and launching it. Then expand the tree to see the following

image

Friday, December 13, 2013

.NET: NGEN, explicit loads and load-context promotion

Sunset over the Pacific

If you want to know the conclusion and want to skip the details, jump to the end for the climax :). If you'd like to see this feature added, please vote for it at http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/5194915-ngen-should-support-explicit-path-based-assembly-l

Load-Context

In my previous post on how NGEN loads native images I mentioned that NGEN images are supported only in the default load context. Essentially there are 3 load contexts (excluding the Reflection-only context) and based on how you load an assembly it lands in one of those 3 contexts. You can read more about the load contexts at http://blogs.msdn.com/b/suzcook/archive/2003/05/29/57143.aspx. However, for our purposes:
  1. Default context: this is where assemblies loaded through implicit assembly references or Assembly.Load(…) calls land
  2. LoadFrom context: this is where assemblies loaded with Assembly.LoadFrom calls are placed
  3. Null context (or neither context): this is where assemblies loaded with Assembly.LoadFile, reflection-emit (among other APIs) are placed
Even though a lot of people view the contexts only in the light of how they impact searching for assembly dependencies, they have other critical impacts. E.g. the native image of an assembly (generated via NGEN) is only loaded if that assembly is loaded in the default context.

#1 and #3 are pretty simple to understand. If you use Assembly.Load, or if your assembly has implicit assembly references, then for those assemblies the NativeBinder will search for native images. If you load an assembly through Assembly.LoadFile(“c:\foo\some.dll”) it will be loaded in the null context and will definitely not get native image support. Things get weird for #2 (LoadFrom).
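As a minimal sketch of the three cases (reusing the hypothetical paths above), this is how the three APIs map to the three contexts:

using System;
using System.Reflection;

class LoadContextDemo
{
    static void Main()
    {
        // 1. Default context: bind by assembly name via the normal probing rules;
        //    implicit references land here too. NGEN images are eligible.
        Assembly byName = Assembly.Load("some");

        // 2. LoadFrom context: bind by path; dependencies also probe from that path
        Assembly byPath = Assembly.LoadFrom(@"c:\temp\some.dll");

        // 3. Neither (null) context: bind to exactly this file; no NGEN image support
        Assembly byFile = Assembly.LoadFile(@"c:\foo\some.dll");

        Console.WriteLine("{0}\n{1}\n{2}", byName.Location, byPath.Location, byFile.Location);
    }
}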

Experiments

Let's see a simple example where I have an executable loadfrom.exe which has the following call
Assembly assem = Assembly.LoadFrom(@"c:\temp\some.dll");
some.dll has been NGEN’d as

c:\temp>ngen install some.dll
Microsoft (R) CLR Native Image Generator - Version 4.0.30319.17929
Copyright (c) Microsoft Corporation.  All rights reserved.
1>    Compiling assembly c:\temp\some.dll (CLR v4.0.30319) ...

Now we run the loadfrom.exe as follows
c:\temp>loadfrom.exe
Starting
Got assembly
In the fusion log I can see, among others, the following messages

WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load().
LOG: Start validating all the dependencies.
LOG: [Level 1]Start validating native image dependency mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089.
Native image has correct version information.
LOG: Validation of dependencies succeeded.
LOG: Bind to native image succeeded.
Attempting to use native image C:\Windows\assembly\NativeImages_v4.0.30319_32\some\804627b300f73759069f96bac51811a0\some.ni.dll.
Native image successfully used.


Interestingly the native image was loaded for some.dll even though it was loaded using Assembly.LoadFrom. This was done in spite of the loader clearly warning in the log that it would not attempt to load the native image.

Now let's try running the same program, this time ensuring that the exe and the dll are not in the same folder

c:\temp>copy loadfrom.exe ..
        1 file(s) copied.

c:\temp>cd ..
c:\>loadfrom.exe
Starting
Got assembly
In this case the log says something different

WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load().
LOG: IL assembly loaded from c:\temp\some.dll.

As you can see the NI image was not loaded.

The reason is load-context promotion. When LoadFrom is used on a path from which a Load would’ve found the assembly anyway, the LoadFrom results in loading the assembly in the default context; that is, the load context is promoted to the default context. In our first example, since c:\temp\some.dll was on the application's base path (APPBASE), the load landed in the default context and the NI was loaded. The same didn’t happen in the second example.
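Pulling the two runs above into a single self-contained sketch (the behavior difference comes purely from where the exe is launched):

using System;
using System.Reflection;

class LoadFromDemo
{
    // Mirrors the loadfrom.exe experiment: run once with the exe sitting in
    // c:\temp next to some.dll, and once from a different folder
    static void Main()
    {
        Console.WriteLine("Starting");
        Assembly assem = Assembly.LoadFrom(@"c:\temp\some.dll");
        Console.WriteLine("Got assembly");

        // If APPBASE == c:\temp the LoadFrom is promoted to the default context
        // and the fusion log shows the some.ni.dll bind; otherwise it stays in
        // the LoadFrom context and only the IL image is loaded
        Console.WriteLine("APPBASE: {0}", AppDomain.CurrentDomain.BaseDirectory);
    }
}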

Conclusion

  1. NGEN images are only supported in the default context, e.g. for assemblies loaded through implicit references or the Assembly.Load() API call
  2. NGEN images are not supported for explicit loads done via Assembly.LoadFile(path)
  3. NGEN images are not reliably supported for explicit loads done via Assembly.LoadFrom(path)
Given the above, there is no reliable way to load assemblies from arbitrary paths and get NGEN native image support. In the modern programming world a lot of large applications are moving away from the traditional GAC based approach to a more plug-in based, loosely-coupled-components approach. These large applications locate their plug-ins via their own proprietary probing logic and load them using one of the explicit path based load mechanisms. For these there is no way to get the performance boost of native images. I think this is a limitation which the CLR needs to address in the future.

Wednesday, December 11, 2013

.NET: Loading Native (NGEN) images and its interaction with the GAC


It’s common for people to think that NGEN works only with strong-named assemblies and that it places its output files in the GAC or is otherwise closely tied to it. This is mostly not true.
If you are new to this, there’s a quick primer on NGEN that I wrote at http://blogs.msdn.com/b/abhinaba/archive/2013/12/10/net-ngen-gac-and-their-interactions.aspx.

GAC

The Global Assembly Cache or GAC is a central repository where managed assemblies can be placed, either using the command-line gacutil tool or programmatically using the Fusion APIs. The main benefits of the GAC are
  1. Avoid dll hell
  2. Provide for a central place to discover dependencies (place your binary in the central place and other applications will find it)
  3. Allow side-by-side publication of multiple versions of the same assembly
  4. Way to apply critical patches, especially security patches, that automatically flow to all apps using that assembly
  5. Sharing of assemblies across processes. Particularly helpful for system assemblies that are used in most managed assemblies
  6. Provide custom versioning mechanisms (e.g. assembly re-directs / publisher policies)
While the GAC has its uses, it has its problems as well. One of the primary problems is that an assembly has to be strongly named to be placed in the GAC, and it’s not always possible to do that. E.g. read here and here.

NIC

The NIC or Native Image Cache is the location where NGEN places native images. When NGEN is run to create a native image as in
c:\Projects>ngen.exe install MyMathLibrary.dll

The corresponding MyMathLibrary.ni.dll is placed in the NIC. The NIC serves a similar purpose to the GAC but is not the same location. The NIC is at <Windows Dir>\assembly\NativeImages_<CLRversion>_<arch>. E.g. a sample path is

c:\Windows\assembly\NativeImages_v4.0.30319_64\MyMathLibrary\7ed4d51aae956cce52d0809914a3afb3\MyMathLibrary.ni.dll

NGEN places the files it generates in NIC along with other metadata to ensure that it can reliably find the right native image corresponding to an IL image.

How does the .NET Binder find valid native images

The CLR module that finds assemblies for execution is called the Binder. There are various kinds of binders that CLR uses. The one used to find native images for a given assembly is called the NativeBinder.

Finding the right native image involves two steps. First the IL image and the corresponding potential native image are located on the file system, and then verification is done to ensure that the native image is indeed valid for that IL. E.g. say the runtime gets a request to bind against an assembly MyMathLibrary.dll because another assembly program.exe has a dependency on it. This is what will happen
  1. First the standard fusion binder will kick in to find that assembly. It can find it either in 
    1. The GAC, which means it is strongly named. The way files are placed in the GAC ensures that the binder can extract all the required information about the assembly without physically opening the file
    2. The APPBASE (e.g. the local folder of program.exe). In this case it will proceed to open the IL file and read the assembly's metadata
  2. Native binding will proceed only in the default context (more about this in a later post)
  3. The NativeBinder finds the NI file from the NIC. It reads in the NI file details and metadata
  4. Verifies the NI is indeed for that very same IL assembly. For that it goes through a rigorous matching process which includes (but is not limited to) a full assembly name match (same name, version, public key token, culture), time stamp matching (the NI has to be newer than the IL) and the MVID (see below)
  5. Also verifies that the NI has been generated for the same CLR under which it is going to run (exact .NET version, processor type, etc.)
  6. Also ensures that the NI’s dependencies are also valid. E.g. when the NI was generated it bound against a particular version of mscorlib. If that mscorlib native image is not valid then this NI image is also rejected
The question is: what happens if the assembly is not strongly named? The answer is that in that case the MVID is used for matching instead of relying on, say, the signing key tokens. The MVID is a GUID that is embedded in an IL file when a compiler compiles it. If you compile an assembly multiple times, each generated IL file has a unique MVID. If you open any managed assembly using ildasm and double-click on its manifest you can see the MVID

.module MyMathLibrary.dll
// MVID: {EEEBEA21-D58F-44C6-9FD2-22B57F4D0193}

If you re-compile and re-open you should see a new id. This fact is used by the NativeBinder as well. NGEN stores the MVID of the IL file for which an NI is generated, and later the native binder ensures that the MVID of the IL file being bound matches the MVID of the IL file for which the NI was generated. This ensures that if you have multiple common.dlls on your PC, all with version 0.0.0.0 and unsigned, even then the NI for one common.dll will not get used for another common.dll.
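If you want to read the MVID programmatically instead of through ildasm, the reflection API exposes it. A small sketch:

using System;
using System.Reflection;

class MvidDump
{
    static void Main(string[] args)
    {
        // Print the MVID of the manifest module of the given assembly, e.g.
        //   MvidDump.exe c:\temp\common.dll
        // ReflectionOnlyLoadFrom reads the metadata without running any code.
        Assembly asm = Assembly.ReflectionOnlyLoadFrom(args[0]);
        Console.WriteLine("{0}: MVID {1:B}",
            asm.GetName().Name,
            asm.ManifestModule.ModuleVersionId);
    }
}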

The Double Loading Problem

In early versions of .NET, when an NI file was opened the corresponding IL file was also opened. I found a 2003 post from Jason Zander on this. Currently this is partially fixed. In the above steps look at step 1. To match an NI with its IL, a bunch of information is required from the IL file. If that IL file comes from the GAC, the IL file need not be opened to get that information, and hence no double loading happens. However, if the IL file comes from outside the GAC then it is indeed opened and kept open. This causes significant memory overhead in large applications. This is something the CLR team needs to fix in the future.

Summary

  1. Unsigned (non strong-named) assemblies can also be NGEN’d
  2. Assemblies need not be placed in GAC to ngen them or to consume the ngen images
  3. However, GAC’d files provide better startup performance and memory utilization when using NI images because they avoid double loading
  4. NGEN captures enough metadata on an IL image to ensure that if its native image has become stale (no longer valid) it will reject the NI and just use the IL

Tuesday, December 10, 2013

NGEN Primer


I am planning to write a couple of NGEN/GAC related posts, so I thought I’d share some introductory notes about NGEN. This is for the beginner managed developer.

Primer

Consider a math library with this simple C# code.
namespace Abhinaba
{
    public class MathLibrary
    {
        public static int Adder(int a, int b)
        {
            return a + b;
        }
    }
}

The C# compiler compiles this code into processor independent CIL (Common Intermediate Language) instead of machine specific (e.g. x86 or ARM) code. The CIL code can be seen by opening the dll generated by the C# compiler in an IL disassembler like the default ildasm that comes with .NET. The CIL code looks as follows
.method public hidebysig static int32  Adder(int32 a,
                                             int32 b) cil managed
{
  // Code size       9 (0x9)
  .maxstack  2
  .locals init ([0] int32 CS$1$0000)
  IL_0000:  nop
  IL_0001:  ldarg.0
  IL_0002:  ldarg.1
  IL_0003:  add
  IL_0004:  stloc.0
  IL_0005:  br.s       IL_0007
  IL_0007:  ldloc.0
  IL_0008:  ret
} // end of method MathLibrary::Adder

To abstract away machine architecture, the .NET runtime defines a generic stack based processor and generates code for this make-believe processor. Stack based means that this virtual processor works on a stack; it has instructions to push/pop values on the stack and instructions to operate on the values already on the stack. E.g. in this particular case, to add two values it pushes both arguments onto the stack using ldarg instructions and then issues an add instruction, which adds the values on top of the stack and pushes the result back. The stack based architecture places no assumption on the number of registers (or even whether the processor is register based at all) the final hardware will have.

Now obviously there is no processor in the real world which executes these CIL instructions. So someone needs to convert them to object code (machine instructions). These real world processors could be from the x86, x64 or ARM families (among other supported platforms). To do this .NET employs Just In Time (JIT) compilation. The JIT compiler's responsibility is to generate native machine specific instructions from the generic IL instructions on demand; that is, as a method is called for the first time the JIT generates native instructions for it and hence enables the processor to execute that method. On my machine the JIT produces the following x86 code for the add
02A826DF  mov         dword ptr [ebp-44h],edx  
02A826E2  nop  
02A826E3  mov         eax,dword ptr [ebp-3Ch]  
02A826E6  add         eax,dword ptr [ebp-40h]  

This process happens on demand. That is, if Main calls Adder, Adder will be JITed only when it is actually called by Main. If a function is never called it is, in most cases, never JITed. The call stack clearly shows this on-demand flow.
clr!UnsafeJitFunction <------------- This will JIT Abhinaba.MathLibrary.Adder 
clr!MethodDesc::MakeJitWorker+0x535
clr!MethodDesc::DoPrestub+0xbd3
clr!PreStubWorker+0x332
clr!ThePreStub+0x11
App!ConsoleApplication1.Program.Main()+0x3c <----- This managed code drove that JIT

The benefits of this approach are
  1. It provides a way to develop applications with a variety of different languages. Each of these languages can target MSIL and hence interop seamlessly
  2. MSIL is processor architecture agnostic, so an MSIL based application can be made to run on any processor on which .NET runs (build once, run many places)
  3. Late binding. Binaries are bound to each other (say an exe to its dlls) late, which allows significant leeway in how loosely coupled they can be
  4. Possibility of very machine specific optimization, as the compilation happens on the exact same machine/device on which the application will run

JIT Overhead

The benefits mentioned above come with the overhead of having to convert the MSIL before execution. The CLR does this on demand; that is, just when a method is about to execute it is converted to native code. This “just in time” dynamic compilation or JITing adds to both application startup cost (a lot of methods are executing for the first time) as well as execution time performance. As a method is run many times, the initial cost of JITing fades away. The cost of executing a method n times can be expressed as

Cost(n) = Cost JIT + n × Cost Execution

At startup most methods are executing for the first time and n is 1, so the JIT cost predominates. This might result in slow startup. This affects scenarios like phones, where slow application startup results in poor user experience, or servers, where slow startup may result in timeouts and failure to meet system SLAs.
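Dividing the total by n makes the amortization explicit:

Cost(n) / n = Cost JIT / n + Cost Execution

so as n grows, the per-call share of the JIT cost shrinks toward zero and the steady-state cost is just the execution cost.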

Another problem with JITing is that it essentially generates instructions in read-write data pages and then executes them. This does not allow the operating system to share the generated code across processes. So even if two applications use the exact same managed code, each contains its own copy of JITed code.

NGEN: Reducing or eliminating JIT overhead

From the beginning .NET has supported pre-compilation via a process called NGEN (derived from Native image GENeration). NGEN consumes an MSIL file, runs the JIT in offline mode, generates native instructions for all managed IL functions and stores them in a native image or NI file. Later, applications can directly consume this NI file. NGEN is run on the same machine where the application will be used, typically during installation of that application. This retains the benefits of JIT and at the same time removes its runtime overhead. Also, since the generated file is a standard executable file, its executable pages can be shared across processes.
c:\Projects\ConsoleApplication1\ConsoleApplication1\bin\Debug>ngen install MyMathLibrary.dll
Microsoft (R) CLR Native Image Generator - Version 4.0.30319.33440
Copyright (c) Microsoft Corporation.  All rights reserved.
1>    Compiling assembly c:\Projects\bin\Debug\MyMathLibrary.dll (CLR v4.0.30319) ...

One of the problems with NGEN generated executables is that the file contains both the IL and the NI code. The files can be quite large in size. E.g. for mscorlib.dll I have the following sizes

Directory of C:\Windows\Microsoft.NET\Framework\v4.0.30319

09/29/2013  08:13 PM         5,294,672 mscorlib.dll
               1 File(s)      5,294,672 bytes

Directory of C:\Windows\Microsoft.NET\Framework\v4.0.30319\NativeImages

10/18/2013  12:34 AM        17,376,344 mscorlib.ni.dll
               1 File(s)     17,376,344 bytes

Read up on the MPGO tool to see how this can be optimized (http://msdn.microsoft.com/library/hh873180.aspx).

NGEN Fragility

Another problem NGEN faces is fragility. If something changes in the system, NGEN images become invalid and cannot be used. This is especially true for hardbound assemblies.

Consider the following code
class MyBase
{
    public int a;
    public int b;
    public virtual void func() {}
}

static void Main()
{
    MyBase mb = new MyBase();
    mb.a = 42;
    mb.b = 20;
}

Here we have a simple class whose fields are being modified. The MSIL code for the access looks like
L_0008: ldc.i4.s 0x2a
L_000a: stfld int32 ConsoleApplication1.MyBase::a
L_000f: ldloc.0 
L_0010: ldc.i4.s 20
L_0012: stfld int32 ConsoleApplication1.MyBase::b

The native code for the variable access can be as follows
            mb.a = 42;
0000004b  mov         eax,dword ptr [ebp-40h] 
0000004e  mov         dword ptr [eax+4],2Ah 
            mb.b = 20;
00000055  mov         eax,dword ptr [ebp-40h] 
00000058  mov         dword ptr [eax+8],14h 

The code generation engine essentially took a dependency on the layout of the MyBase class while generating the code that updates those fields. The hard-coded layout dependency is that the compiler assumes MyBase looks like

<base>        MethodTable
<base> + 4    a
<base> + 8    b

The base address is stored in the eax register and the updates are made at offsets of 4 and 8 bytes from that base. Now consider that MyBase is defined in assembly A and is accessed by some code in assembly B, and that assemblies A and B are both NGENed. If for some reason the MyBase class (and hence assembly A) is modified so that the new definition becomes
class MyBase
{
    public int foo;
    public int a;
    public int b;
    public virtual void func() {}
}

From the perspective of MSIL code, the references to these fields are via their symbolic names (e.g. ConsoleApplication1.MyBase::a), so if the layout changes the JIT compiler at runtime will find their new locations from the metadata in the assembly and bind to the correct updated locations. With NGEN, however, this all changes: the NGEN image of the accessor is invalid and has to be updated to match the new layout

<base>        MethodTable
<base> + 4    foo
<base> + 8    a
<base> + 12   b

This means that when the CLR picks up an NGEN image it needs to be absolutely sure about its validity. More about that in a later post.

Friday, December 06, 2013

.NET: Figuring out if your application is exception heavy

Ocean beach
In the past I worked on an application which used modules from different teams. Many of these modules raised and caught a ton of exceptions, so much so that performance data showed these exceptions were causing issues. So I had to figure out an easy way to programmatically find the offending code and inform its owners that exceptions are for exceptional scenarios and shouldn’t be used for normal code flow :)
Thankfully the CLR provides an easy hook in the form of an AppDomain event. I just need to hook into the AppDomain.FirstChanceException event and the CLR notifies me upfront when an exception is raised. It does that even before any managed code gets a chance to handle it (and potentially suppress it).
The following is a plugin which throws and catches an exception.
namespace Plugins
{
    public class FunkyPlugin
    {
        public static void ThrowingFunction()
        {
            try
            {
                Console.WriteLine("Just going to throw");
                throw new Exception("Cool exception");
            }
            catch (Exception ex)
            {
                Console.WriteLine("Caught a {0}", ex.Message);
            }
        }
    }
}

In the main application I added code to subscribe to the FirstChanceException event before calling the plugins
using System;
using System.Runtime.ExceptionServices;
using System.Reflection;

namespace foo
{
    public class Program
    {
        static void Main()
        {
            // Register handler
            AppDomain.CurrentDomain.FirstChanceException += FirstChanceHandler; 
            Plugins.FunkyPlugin.ThrowingFunction();
        }
        
        static void FirstChanceHandler(object o, 
                                       FirstChanceExceptionEventArgs e)
        {
            MethodBase site = e.Exception.TargetSite;
            Console.WriteLine("Thrown by : {0} {1}({2})", site.Module, 
                                                          site.DeclaringType, 
                                                          site.ToString());
            Console.WriteLine("Stack: {0}", e.Exception.StackTrace);
        }
    }
}

The FirstChanceHandler just dumps out the module, type and method that raised the exception. The output of this program is as follows
Just going to throw
Thrown by : some.dll Plugins.FunkyPlugin(Void ThrowingFunction())
Stack:    at Plugins.FunkyPlugin.ThrowingFunction()
Caught a Cool exception

As you can see, the handler runs even before the catch block executes, and I have the full information of the assembly, type and method that threw the exception.
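If the goal is to find out which modules are exception heavy (rather than to log each throw), the same event can feed a per-method counter. Here is a minimal sketch under that assumption; the type and member names are mine, not part of any framework API:

using System;
using System.Collections.Concurrent;
using System.Runtime.ExceptionServices;

static class ExceptionStats
{
    // Count of first-chance exceptions keyed by "module!type.method"
    static readonly ConcurrentDictionary<string, int> counts =
        new ConcurrentDictionary<string, int>();

    public static void Install()
    {
        AppDomain.CurrentDomain.FirstChanceException += (o, e) =>
        {
            var site = e.Exception.TargetSite; // can be null for some exceptions
            string key = site == null
                ? "<unknown>"
                : string.Format("{0}!{1}.{2}", site.Module, site.DeclaringType, site.Name);
            counts.AddOrUpdate(key, 1, (k, v) => v + 1);
        };
    }

    public static void Dump()
    {
        foreach (var kv in counts)
            Console.WriteLine("{0,6} {1}", kv.Value, kv.Key);
    }
}

Calling ExceptionStats.Install() at startup and Dump() at shutdown gives a quick table of throw counts per method, which is usually enough to start the conversation with the module owners.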

Behind the Scene

For most it might suffice to know that the event handler gets called before anyone gets a chance to handle the exception. However, if you care about when this is fired: it's in the first pass (first chance), just after the runtime notifies the debugger/profiler.

The managed exception system piggybacks on the native OS exception handling system. Though x86 exception handling (FS:0 based chaining) is significantly different from x64 (PDATA), the basic idea is the same:
  1. From the outside a managed exception looks exactly like a native exception, and hence the OS's normal exception handling mechanism kicks in
  2. Exception handling requires some mechanism to walk the call stack of the thread on which the exception is thrown, so that an up-level catch block can be found and the finally blocks of all functions between the point of throw and the catch can be called. The mechanism varies between x86 and x64 but is not super relevant for our discussion (a series of data structures pushed onto the stack in the case of x86, or a series of data structure tables registered with the OS on x64)
  3. On an exception the OS walks the stack and, for managed function frames, calls into the CLR's registered personality routine (that's what it's called :)). This routine knows how to handle managed exceptions
  4. This routine notifies the profiler and then the debugger of this first-chance exception, so that the debugger can potentially break on the exception and do other relevant operations. If the debugger did not handle the first-chance exception, processing of the exception continues
  5. If there is a registered handler for FirstChanceException, it is called
  6. The JIT is consulted to find the appropriate catch block for the exception (none might be found)
  7. The CLR returns the right set of information to the OS, indicating that the exception will indeed be processed
  8. The OS initiates the second pass
  9. For every function between the frame of the exception and the found catch block, the CLR's handler routine is called, and the CLR consults the JIT to find the appropriate finally blocks and proceeds to call them for cleanup. In this phase the stack actually starts unwinding
  10. This continues till the frame in which the catch was initially found is reached. The CLR proceeds to execute the catch block
  11. If all is well the exception has been caught and processed and peace is restored to the world
As should be evident from this flow, the FirstChanceHandler gets called before any code gets a chance to catch the exception, and it is called even for exceptions that will eventually go unhandled.

PS: Please don’t throw an exception in the FirstChance handler :)

Wednesday, December 04, 2013

Bing it on

[photo]

In early 2008 I joined the CLR team to clean garbage (or rather, to write Garbage Collectors :)). It has been a wild ride writing generational Garbage Collectors and memory managers, and tinkering with runtime performance and memory models. It’s been great to see people on the street use stuff that I helped build, and to see internal team reports as .NET updates/downloads went out to hundreds of millions of machines. In one case I found myself under a medical diagnostic device clearly running on .NET. I didn’t run out, indicating my faith in the stuff we built (or maybe I was sedated, who knows).

I decided to change things up a bit and move from the world of devices, desktops and consoles to that of the cloud. From this week I have begun working in the Bing team. I will no longer be a part of the team that builds the CLR, but part of the team which really pushes the usage of the CLR to the extreme: using .NET to serve billions of queries on thousands of machines.

I hope to continue blogging about CLR/.NET, now providing a user's perspective of the best managed runtime in the world.

BTW the photo above is the fortune cookie I got at my farewell lunch with the CLR team. Very appropriate.

Wednesday, October 02, 2013

Quick/Dirty Function Coverage using Windbg

To find code coverage at line and block granularity you need a full-fledged code coverage tool. However, sometimes you can use a quick and dirty trick in WinDbg to see which functions are being called. This works well when you need to do it for a small set of functions, which is what I recently needed to do. Let's say it was for all the functions of a class called Graph.

First I got the application under the debugger, where it automatically broke in at application start. Then I added breakpoints to all these functions using the following

0:000> bm test!Graph*::* 10000
2: 003584a0 @!"test!Graph::Vertex::`scalar deleting destructor'"
3: 00356f80 @!"test!Graph::Vertex::~Vertex"
4: 00358910 @!"test!Graph::AddVertex"
5: 00356b70 @!"test!Graph::~Graph"
6: 003589d0 @!"test!Graph::RangeCheck"
7: 003589b0 @!"test!Graph::Count"
8: 00357ce0 @!"test!Graph::operator[]"
9: 003561a0 @!"test!Graph::Vertex::Vertex"
10: 00356170 @!"test!Graph::Vertex::Vertex"
11: 00358130 @!"test!Graph::`scalar deleting destructor'"
12: 003588a0 @!"test!Graph::AddEdge"
13: 003551e0 @!"test!Graph::Graph"

Here I am telling WinDbg to add breakpoints on all methods of the Graph class with a very large pass count of 0x10000. Then I just let the application proceed and played with the various controls. Finally I closed the application, at which point it broke into the debugger again, and I listed the breakpoints.

0:000> bl
1 e 00352c70 0001 (0001) 0:**** test!main
2 e 003584a0 fd8f (10000) 0:**** test!Graph::Vertex::`scalar deleting destructor'
3 e 00356f80 fcc7 (10000) 0:**** test!Graph::Vertex::~Vertex
4 e 00358910 ff38 (10000) 0:**** test!Graph::AddVertex
5 e 00356b70 ffff (10000) 0:**** test!Graph::~Graph
6 e 003589d0 d653 (10000) 0:**** test!Graph::RangeCheck
7 e 003589b0 c1de (10000) 0:**** test!Graph::Count
8 e 00357ce0 fda7 (10000) 0:**** test!Graph::operator[]
9 e 003561a0 fd8f (10000) 0:**** test!Graph::Vertex::Vertex
10 e 00356170 ff38 (10000) 0:**** test!Graph::Vertex::Vertex
11 e 00358130 ffff (10000) 0:**** test!Graph::`scalar deleting destructor'
12 e 003588a0 ec56 (10000) 0:**** test!Graph::AddEdge
13 e 003551e0 ffff (10000) 0:**** test!Graph::Graph

The key fields are the two counts in an entry like the following

13 e 003551e0 ffff (10000) 0:**** test!Graph::Graph

10000 (hex) is the pass count after which the debugger would actually break; ffff is how many passes are left before that happens. So a simple subtraction (0x10000 – 0xFFFF) tells us that this function has been called once. It’s easy to see that one Graph object was created and destroyed (1 call to the ctor and 1 to the dtor) and that Graph::Count was called 15906 times (0x10000 – 0xC1DE). So I didn’t really miss any of the functions in that test. If I had missed one, it would say 10000 (10000) for that function.

Saturday, September 28, 2013

Custom Resolution of Assembly References in .NET

Right now I am helping a team with an assembly resolution conflict bug. I thought I’d share the details, because I am sure others have landed in this situation.

In complex managed applications, especially ones that use dynamically loaded plugins/addons, it’s common that not all the assemblies required by these addons are present in the assembly probing path. In a perfect world the closure of all assembly references of an application is strong-named, the CLR binder uses the standard probing order to find all assemblies, and everything works perfectly. However, the world is not ideal.

The requirement to resolve assemblies from different locations does arise, and .NET has support for it. E.g. this stackoverflow question http://stackoverflow.com/questions/1373100/how-to-add-folder-to-assembly-search-path-at-runtime-in-net has rightly been answered by pointing to the AssemblyResolve event. When .NET fails to find an assembly after probing (looking through) the various folders it uses for assemblies, it raises an AssemblyResolve event. User code can subscribe to this event and supply assemblies from whatever path it wants.

This simple mechanism can be abused in ways that result in major system issues. The main problem arises from over-eager resolution code. Consider an application A that uses two modules (say plugins) P1 and P2. P1 and P2 are somehow registered with A, and A uses Assembly.Load to load them. However, P1 and P2 ship with various dependencies which they place in various sub-directories that A is unaware of, and the CLR obviously doesn’t look into those folders to resolve the assemblies. To handle this situation both P1 and P2 have independently decided to subscribe to the AssemblyResolve event.

The problem is that whenever the CLR fails to locate an assembly it calls the registered resolve-event handlers sequentially. So, based on the order in which the handlers were registered, it is possible that for a missing dependency of P2 the resolution handler of P1 gets called. Coincidentally, the assembly the CLR is failing to resolve may be known to both P1 and P2, perhaps because the name is generic or because it’s a widely used 3rd party assembly which a ton of plugins use. So P1 loads the missing assembly P2 is referring to and returns it. The CLR goes ahead and binds P2 to the assembly P1 returned. This is when bad things start to happen, because maybe P2 needs a different version of it. Crash follows.

The MSDN documentation has already called out how to handle these issues in http://msdn.microsoft.com/en-us/library/ff527268.aspx. Essentially follow these simple rules

  1. Follow the best practices for assembly loading http://msdn.microsoft.com/en-us/library/dd153782.aspx
  2. Return null if you do not recognize the referring assembly
  3. Do not try to resolve assemblies you do not recognize
  4. Do not use Assembly.Load or AppDomain.Load to resolve the assemblies because that can result in recursive calls to the same ResolveEvent finally leading to stack-overflow.

The skeleton code for the resolve event handler can be something like

static Assembly currentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
{
    // Do not try to resolve assemblies which your module doesn't explicitly handle.
    // There will be other resolve handlers that are called in sequence; let them
    // do their job
    if (args.RequestingAssembly == null || !IsKnownAssembly(args.RequestingAssembly))
        return null;

    // Parse and create the name of the assembly being requested and then use your
    // own logic to locate the assembly
    AssemblyName aname = new AssemblyName(args.Name);
    string path = FindAssemblyByCustomLogic(aname);

    if (!File.Exists(path))
        return null;

    Assembly assembly = Assembly.LoadFile(path);
    return assembly;
}



Here you need to fill in the implementation of IsKnownAssembly which only returns true for assemblies that belong to your module.
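One possible shape for it, purely as an illustrative sketch (the folder-based policy here is an assumption, not the only valid one), is to only claim resolution requests coming from assemblies that live in your own module's folder:

static bool IsKnownAssembly(Assembly requestingAssembly)
{
    // Dynamic assemblies have no location on disk; don't claim them
    if (requestingAssembly.IsDynamic || string.IsNullOrEmpty(requestingAssembly.Location))
        return false;

    // Treat as "known" only assemblies in the same folder as this module
    string myFolder = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
    string theirFolder = Path.GetDirectoryName(requestingAssembly.Location);
    return string.Equals(myFolder, theirFolder, StringComparison.OrdinalIgnoreCase);
}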

Monday, April 29, 2013

Arduino Fun – Door Entry Alarm

 
Arduino UNO based door entry alarm

Physical computing and the “internet of things” is a super exciting area that is unfolding right now. Even decades back one could hook up sensors, get that data remotely and process it. What is special now is that powerful micro-controllers are dirt cheap and most of us carry a really powerful computing device in our pockets. Connecting everything wirelessly is also very easy now and almost every home has a wireless network.

All of these put together can create some really compelling and cool stuff where data travels from a sensor over wireless networks into the cloud and finally into the cell phone we carry everywhere. I ultimately want to create a smart door so that I can get a notification at work when someone knocks on our home door. Maybe I can even remotely open the door. The possibilities are endless, but time is not, so let's see how far I get in some reasonable amount of time.

Arduino UNO

I unboxed the Arduino Uno Ultimate Starter Kit that I got last week and spent some time with my daughter setting it up. The kit comes with a fantastic manual that helped me recap the basics of electronics. It contains an Arduino UNO prototyping board based on the ATmega328 chip, a low-power 8-bit microcontroller running at a maximum of 20 MHz. To most people that seems paltry, but it’s amazing what you can pull off with one of these chips.

The final assembled setup of the kit looks as follows. It comes with a nice handy plastic surface on which the Arduino (in blue) and a breadboard are stuck. It’s connected over USB to my PC.

[photo]

Getting everything up was easy with the instruction booklet that came with the kit. The only glitch was that Windows 8 wouldn’t let me install the drivers because they are not properly signed. So I had to follow the steps given here to disable driver verification.

After that the Arduino IDE connected to the board and I could easily write and deploy code (C like syntax).

image

The tool bar icons are a bit weird though (side arrow for upload and up arrow for open????).

There was no way to debug through the IDE (or at least I couldn’t find one). So I set up some easy printf style debugging. Basically you write to the serial port and the IDE displays it.

image

It was after this that I got to know that there’s a Visual Studio plugin with full debugging support. However, I haven’t yet used that.

The Project

I decided to start out by making a simple entry alarm and see how much time it takes to get everything done. In college I built something similar, but without a microcontroller (based on a 555 IC and IR photo-transistors), and it took a decent amount of time to hook up all the components. Basically the idea is that across the door there will be a source of light, with a sensor on the other side. When someone passes between them, the light falling on the sensor is obstructed and this sounds an alarm.

When I last did it in college I made it really robust by using a pulsating (fixed frequency) IR LED as the source along with IR sensors. For this project I relied on visible light and the photo-resistor that came with the kit.

I built the following circuit.

image

I connected a photo-resistor in series with another 10K resistor and connected the junction to the analog input pin A0 of the Arduino. Essentially this acts like a voltage divider. In bright light the junction, and hence the A0 input, reads around 1.1 V. When the light is obstructed the resistance of the photo-resistor changes and the junction reads 2.6 V. The analog pins read in a range of 0 (meaning 0 V) to 1023 (for 5 V). So this roughly comes to around 225 in light and 530 in the shade. Obviously these values are relative to the strength of the light and how dark it becomes when someone obstructs it. To avoid taking a dependency on absolute values I created another voltage divider using a potentiometer and connected it to another analog input pin, A1. Now I can turn the potentiometer to control a threshold value. If the voltage at A0 is above this threshold, it means it’s dark enough that someone obstructed the light falling on the resistor and it’s time to sound the alarm.
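A quick sanity check on those numbers: the 10-bit ADC maps 0–5 V onto 0–1023, so

reading ≈ 1023 × V / 5 V, giving 1023 × 1.1 / 5 ≈ 225 (light) and 1023 × 2.6 / 5 ≈ 532 (shade)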

The alarm consists of flashing blue and red LEDs (obviously to match police lights) and a standard siren sound played using a piezo crystal that also came with the kit.

The fully assembled and deployed setup looks as follows.

[photo]

Update: the picture above says photo-transistor; it should be photo-resistor.

Code

The key functions are setup(), which is automatically called at startup, and loop(), which as the name suggests is called in a loop. setup() configures the digital pins for output to drive the flashing LEDs. In loop() I read the values of the photo-resistor and the potentiometer, and based on the comparison I sound the alarm.

// Define the constants
const int sensorPin = 0;   // Photo-resistor pin
const int controlPin = 1;  // Potentiometer pin
const int buzzerPin = 9;   // Buzzer pin
const int rLedPin = 10;    // Red LED pin
const int bLedPin = 11;    // Blue LED pin

// Always called at startup
void setup()
{
    // Set the two LED pins and the buzzer pin as output
    pinMode(rLedPin, OUTPUT);
    pinMode(bLedPin, OUTPUT);
    pinMode(buzzerPin, OUTPUT);
}

// This loops forever
void loop()
{
    int sensorVal = analogRead(sensorPin);
    int controlVal = analogRead(controlPin);

    if (sensorVal > controlVal)
    {
        // Sensor reading is above the threshold: the light is
        // obstructed, so sound the buzzer
        playBuzzer(buzzerPin);
    }

    delay(100);
}

void playBuzzer(const int buzzerPin)
{
    for (int i = 0; i < 3; ++i)
    {
        // Alternate between two tones, one high and one low;
        // at the same time alternate the blue and red LED flashing

        digitalWrite(rLedPin, HIGH);  // Red LED on
        tone(buzzerPin, 400);         // play 400 Hz tone for 500 ms
        delay(500);
        digitalWrite(rLedPin, LOW);   // Red LED off

        digitalWrite(bLedPin, HIGH);  // Blue LED on
        tone(buzzerPin, 800);         // play 800 Hz tone for 500 ms
        delay(500);
        digitalWrite(bLedPin, LOW);   // Blue LED off
    }

    // Stop the buzzer
    noTone(buzzerPin);
}



Next Steps



This system has some obvious flaws. Someone can duck below or jump over the light path, or even shine a flashlight on the sensor while passing through. To make it robust, consider using strips of mirrors on the two sides and bouncing a laser (preferably IR) off them, so that it’s virtually impossible to get through without breaking the beam.



You could also use a pulsating source of light and detect its frequency at the sensor. This would make it even harder to defeat.

Saturday, April 06, 2013

Information Management on our California Road Trip

I am just back from a road trip that spanned 3200 miles (5000 km) from Seattle to San Diego and back. We travelled with an 8 year old, which meant more fun and excitement, for everybody ;)

My digital gear on this trip was as follows

My Cyan Nokia Lumia 920

Wife’s White Lumia 920

Both with AT&T unlimited 4G connectivity.

The Live Tile based Windows Phone 8 worked amazingly well. See below for the apps I used
My daughter’s magenta Microsoft Surface RT

My Gray Surface RT

The tablets worked remarkably well and battery life was well beyond my expectations. My daughter could just plug a USB thumb drive in and watch videos off it directly on the Surface RT.
Wife’s Acer Aspire. She still uses a laptop and I needed it to process photos. The Surface RT unfortunately falls short as it cannot run Photoshop Lightroom for me. This sleek powerful laptop with ultra fast startup (due to SSD) worked pretty well.
Canon T1i (time to upgrade) and a bunch of lenses, flash, tripod, reflector, etc.
  Miscellaneous stuff, like thumb drives to back up photos, various chargers, etc.

[screenshot]

I kept all of this stuff (other than the camera) in a single bag and carefully hauled it around, ensuring it was never left unattended in the car.

My Windows Phone 8 (Nokia Lumia 920) rocked this trip. I am sure other phones would’ve worked as well, but the live tiles of Windows Phone really shined through. I installed a bunch of apps and pinned them all together.

OneNote: I literally run my life on OneNote these days. From planning trips and hikes to debugging notes, I keep everything in OneNote. Using OneNote I could plan my trip on the Surface from the hotel bed. The OneNote doc would auto-sync with our phones, ensuring that the information was at our finger-tips the next day when we were on the road. The large pinned California OneNote tile contained all our trip plans.

Weather App: I used The Weather Channel app. In the screenshot you can see the live tile for my next destination pinned to the home screen. WP8 allows multiple of those to be pinned, so I could always see what was coming up (a sunny and mild day in SFO) in all the cities I was planning to be in. I did switch around the order of cities I was travelling to based on the weather forecast.

Nokia Drive + Maps + City Lens: In addition to the car GPS we used the Nokia suite of tools, including Drive, Maps and City Lens, to get our bearings while walking around a city. A few times we used the phone for in-car navigation as well. The maps didn’t let us down.

gMaps: Unfortunately the gMaps Google Maps app didn’t work out that well for me. Google Maps has much better data in some contexts (e.g. trails and street view), but the app was unstable; it also always needed connectivity, and reduced network coverage on the road seriously hampered its usage. I think it is sad that Google doesn’t make their own apps for Windows Phone.

Music: We used Nokia Music as it has better Bollywood coverage. We also got a ton of our Bengali music collection onto the phone. By the end of the trip, Prokriti (my daughter) was humming tunes from our college day music.

Hotels.com: I did most of my booking on hotels.com, especially because it gives a free one-night stay after 10 reservations. The limited screen on the device did pose some issues in making the right hotel choices, but I even made a reservation while standing in a queue at SeaWorld.

TripAdvisor: for finding local points of interest

Yelp: For deciding where to eat.

4th & Mayor for making foursquare checkins so that I can later recollect where I’d been.

Netflix: To keep my daughter busy when the trip really seemed long. I do have unlimited AT&T 4G on our phones, but high-speed connectivity is hard to come by on the road, so she mostly watched stuff on her Surface.

Photoshop Lightroom: On most nights, after settling down in the hotel, I would transfer photos from my camera and do a quick pass with Lightroom, making reject-or-keep calls on them. Then I'd back up the photos on a thumb drive and also copy them over to one of the Surface RTs. So I essentially had 3 copies at all times.

Tuesday, April 02, 2013

C++/CLI and mixed mode programming

beach

I had a very limited idea about how mixed mode programming on .NET works. In mixed mode a binary can have both native and managed code. Such binaries are generally programmed in a special variant of the C++ language called C++/CLI, and the sources need to be compiled with the /CLR switch.

For some recent work I had to ramp up on managed C++ usage and how the .NET runtime supports the mixed mode assemblies it generates. I wrote up some notes for myself and later thought they might be helpful for others trying to understand the inner workings.

History

The initial foray of C++ into the managed world was via the Managed Extensions for C++, or MC++. This is deprecated now and was originally released with VS 2003. The MC++ syntax turned out to be too confusing and wasn’t adopted well, so MC++ was soon replaced with C++/CLI. C++/CLI adds limited extensions over C++ and is better designed, so the language feels more in sync with the general C++ language specification.

C++/CLI

The code looks like this.

ref class CFoo
{
public:
    CFoo()
    {
        pI = new int;
        *pI = 42;
        str = L"Hello";
    }

    void ShowFoo()
    {
        printf("%d\n", *pI);
        Console::WriteLine(str);
    }

    int *pI;
    String^ str;
};

In this code we are defining a reference type class CFoo. This class uses both managed (str) and native (pI) data types and seamlessly calls into managed and native code. The developer does not have to write any special interop code.


The managed type uses special handles denoted by ^ as in String^ and native pointers continue to use * as in int*. A nice comparison between C++/CLI and C# syntax is available at the end of http://msdn.microsoft.com/en-US/library/ms379617(v=VS.80).aspx. Junfeng also has a good post at http://blogs.msdn.com/b/junfeng/archive/2006/05/20/599434.aspx
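As a rough illustration of the comparison those links draw (this is my own sketch, not from either article), the purely managed parts of CFoo would look like this in C#; note that C# only allows a raw int* member inside unsafe code, which is exactly the gap C++/CLI fills:

using System;

class CFoo
{
    // The String^ handle in C++/CLI corresponds to a plain reference here
    private string str = "Hello";

    public void ShowFoo()
    {
        Console.WriteLine(str);
    }
}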


The benefits of using mixed mode



  1. Easy to port over C++ code and take the benefit of integrating with other managed code
  2. Access to the extensive managed API surface area
  3. Seamless managed to native and native to managed calls
  4. Static-type checking is available (so no mismatched P/Invoke signatures)
  5. Performance of native code where required
  6. Predictable finalization of native code (e.g. stack based deterministic cleanup)

 


Implicit Managed and Native Interop


Seamless, statically type-checked, implicit interop between managed and native code is the biggest draw of C++/CLI.


Calls from managed to native and vice versa are transparently handled and can be intermixed; e.g. managed --> unmanaged --> managed call chains are handled without the developer having to do anything special. This technology is called IJW (It Just Works). We will use the following code to understand the flow.

#pragma managed
void ManagedAgain(int n)
{
    Console::WriteLine(L"Managed again {0}", n);
}

#pragma unmanaged
void NativePrint(int n)
{
    wprintf(L"Native Hello World %u\n\n", n);
    ManagedAgain(n);
}

#pragma managed
void ManagedPrint(int n)
{
    Console::WriteLine(L"Managed {0}", n);
    NativePrint(n);
}

The call flow goes ManagedPrint --> NativePrint --> ManagedAgain


Native to Managed


For every managed method, a managed and an unmanaged entry point are created by the C++ compiler. The unmanaged entry point is a thunk/call-forwarder: it sets up the right managed context and calls into the managed entry point. It is called the IJW thunk.


When a native function calls into a managed function, the compiler actually binds the call to the native forwarding entry point of the managed function. If we inspect the disassembly of NativePrint, we see the following code generated to call into the ManagedAgain function

00D41084  mov         ecx,dword ptr [n]          // Store NativePrint argument n to ECX
00D41087  push        ecx                        // Push n onto stack
00D41088  call        ManagedAgain (0D4105Dh)    // Call IJW thunk


Now 0x0D4105D is the address of the native entry point. It forwards the call to the actual managed implementation

ManagedAgain:
00D4105D jmp dword ptr [__mep@?ManagedAgain@@$$FYAXH@Z (0D4D000h)]

Managed to Native


In the case where a managed function calls into a native function, standard P/Invoke is used. The compiler just defines a P/Invoke signature for the native function in MSIL

.method assembly static pinvokeimpl(/* No map */) 
void modopt([mscorlib]System.Runtime.CompilerServices.CallConvCdecl)
NativePrint(int32 A_0) native unmanaged preservesig
{
.custom instance void [mscorlib]System.Security.SuppressUnmanagedCodeSecurityAttribute::.ctor() = ( 01 00 00 00 )
// Embedded native code
// Disassembly of native methods is not supported.
// Managed TargetRVA = 0x00001070
} // end of method 'Global Functions'::NativePrint


The managed to native call in IL looks like

Managed IL:
IL_0010:  ldarg.0
IL_0011:  call       void modopt([mscorlib]System.Runtime.CompilerServices.CallConvCdecl) NativePrint(int32)

At runtime the virtual machine (CLR) generates the correct thunk to get the managed code to P/Invoke into native code. It also takes care of other things like marshaling managed arguments to native and vice versa.


Managed to Managed


While it would seem this should be easy, it was a bit more convoluted. Essentially the compiler always bound to the native entry point for a given managed method. So a managed to managed call degenerated to managed -> native -> managed and hence resulted in a suboptimal double P/Invoke. See http://msdn.microsoft.com/en-us/library/ms235292(v=VS.80).aspx


This was fixed in later versions by using dynamic checks, ensuring managed calls always call into managed targets directly. However, in some cases managed to managed calls still degenerate to double P/Invoke. So an additional knob was provided: the __clrcall calling convention keyword. This stops the native entry point from being generated completely. The pitfall is that such methods are not callable from native code. So if I stick a __clrcall in front of ManagedAgain, I get the following build error while compiling NativePrint.

Error	2	error C3642: 'void ManagedAgain(int)' : cannot call a function with
__clrcall calling convention from native code <filename>

/CLR:PURE


If a C++ file is compiled with this flag, instead of a mixed mode assembly (one that has both native code and MSIL) a pure MSIL assembly is generated. So all methods are __clrcall and the C++ code is compiled into MSIL code and NOT to native code.


This comes with some benefits, in that the assembly becomes a standard MSIL based assembly no different from any other managed-only assembly. It also comes with some limitations. Native code cannot call into the managed code in this assembly, because there are no native entry points to call into. However, native data is supported, and the managed code can still transparently call into other native code. Let's see a sample.


I moved all the unmanaged code to a separate native dll as

void NativePrint(int n)
{
    wprintf(L"Native Hello World %u\n\n", n);
}

Then I moved my managed C++ code to a new project and compiled it with /CLR:PURE

#include "stdafx.h"
#include

#include "..\Unmanaged\Unmanaged.h"
using namespace System;

void ManagedPrint(int n)
{
    char str[30] = "some cool number";       // native data
    str[5] = 'f';                            // modifying native data
    Console::WriteLine(L"Managed {0}", n);   // call to BCL
    NativePrint(n);                          // call to my own native methods
    printf("%s %d\n\n", str, n);             // CRT
}

int main(array<System::String ^> ^args)
{
    ManagedPrint(42);
    return 0;
}

The above builds and works fine. So even with /CLR:PURE I was able to



  1. Use native data like a char array and modify it
  2. Call into BCL (Console::WriteLine)
  3. Call transparently into other native code without having to hand generate P/Invoke signatures
  4. Use native CRT (printf)

However, no native code can call into ManagedPrint. Also note that even though pure MSIL is generated, the code is unverifiable (think C# unsafe). So it doesn't get the added safety that the managed runtime provides (e.g. I can just do str[200] = 0 and not get any bounds check error).


/CLR:Safe


The /CLR:Safe compiler switch generates MSIL-only assemblies whose IL is fully verifiable. The output is no different from anything generated by, say, the C# or VB.NET compilers. This provides more safety for the code, but at the same time loses several capabilities over and above the PURE variant:



  1. No support for CRT
  2. Only explicit P/Invokes

So for /CLR:Safe we need to do the following

[DllImport("Unmanaged.dll")]
void NativePrint(int i);

void ManagedPrint(int n)
{
//char str[3000] = "some cool number"; // will fail to compile with
//str[5] = 'f'; // "this type is not verifiable"

Console::WriteLine(L"Managed {0}", n);

NativePrint(n); // Hand coded P/Invoke

Migration


MSDN has some nice articles for people trying to migrate from /CLR:

  1. To /CLR:Pure http://msdn.microsoft.com/en-US/library/ms173253(v=vs.80).aspx
  2. To /CLR:Safe http://msdn.microsoft.com/en-US/library/ykbbt679(v=vs.80).aspx

Moving to Outlook from Google Reader

I am sure everyone by now knows that Google Reader is being shut down. I am a heavy user of Google Reader (or Greeder as I call it) and I immediately started looking for an alternative, when it suddenly occurred to me that all the PCs I use have Outlook installed. So if you work for an organization that runs on Exchange server, this could really work out well: you can use Office Outlook and Exchange as a great RSS feed reader. Consider this

  1. It will provide full sync across multiple Outlook clients running on different PCs
  2. It will provide on-the-go access via Outlook Web Access
  3. Your phone’s Outlook client should also work with it
  4. You can pretend to work while wasting time on boingboing

First things first: export the OPML file from Google Reader

Log in to www.google.com and then go to https://www.google.com/takeout/#custom:reader

image

This will take some time and create an archive.

image

Click on the download button and save the zip. Then extract the zip as follows

image

Inside the extracted folder you will find the OPML file. For me it was at C:\Users\admin\Desktop\XXXXXXXX@gmail.com-takeout\XXXXXXXX@gmail.com-takeout\Reader\subscriptions.xml

Import to Outlook

This OPML file needs to be imported into Outlook. Use the File tab to bring up the following UI in Outlook.

image

Then use Import to bring up the following

image

Choose OPML file and tap on Next. Now point it to the file you extracted. For me it was C:\Users\admin\Desktop\XXXXXXXX@gmail.com-takeout\XXXXXXXX@gmail.com-takeout\Reader\subscriptions.xml

Hit next and finally choose the feeds you want to import (Select All).

image

Then tap on Next, and there you have it: Outlook as an RSS feed reader…

image

Read Everywhere

It totally syncs via the cloud. Here I have it open in the browser. As you read a post it tracks what you have read, and across browsers and all your Outlook clients at work and home it will keep everything in sync.

image

It works great on the Windows Phone as well. I assume any Exchange client should work.

[screenshots]

Pain Points

While reading in Outlook was seamless, there are some usability issues in both the browser and the phone. Surface RT is broken; someone should really fix the mail client on Surface :(

The major pain point I see is that in Outlook Web Access the pictures are not shown. Tapping on the picture placeholders works. I think some security feature is blocking the embedded images.

Also, on the Windows Phone you have to go to each feed and set it up so that it syncs that folder. This is a pain, but I guess it protects against downloads over carrier networks.

C# code for Creating Shortcuts with Admin Privilege

Seattle skyline

 

Context

If you just care about the code, then jump to the end :)

In the CLR team, and across other engineering teams in Microsoft, we use build environments which are essentially command shells with a custom environment set up using bat and cmd scripts. These scripts set up various paths and environment variables to pick up the right set of build tools and output paths matching the architecture and build flavor. E.g. one example of launching such a shell could be…

cmd.exe /k %BRANCH_PATH%\buildenv.bat <architecture> <build-flavor> <build-types> <etc...>

The architectures vary between things like x86, amd64 and ARM; build flavors vary between debug, check, retail, release, etc. The build types indicate the target, like desktop, CoreCLR or Metro. Even though not every combination is allowed, the allowed combinations approach around 30. In the case of .NET Compact Framework, which supports many more architectures (e.g. MIPS, PPC) and targets (e.g. Xbox360, S60), the number of combinations is even larger.


For day to day development I either need to enter this command each time I move to a different shell, or I have to create desktop shortcuts for all the combinations. This becomes repetitive each time I move to a different code branch. I had created a small app that I ran each time I moved to a new branch; given the branch details, it would generate shortcuts for all the combinations. However, our build requires elevation (admin privilege). So even though I created the shortcuts, I’d have to either right click and use “Run as administrator” or set that in each shortcut's properties.


image


image


This was a nagging pain for me. I couldn’t find an easy programmatic way to create a shortcut with administrator privilege (I’m sure there is some shell API to do it). So finally I binary-compared two shortcuts, one with “Run as administrator” set and one without, and saw that only one byte was different. So I hacked up code to generate the shortcut and then modify that byte. I am sure there is a better/safer way to do this, but for now this “works for me”.


The Code


Since I didn’t find any online source for this code, I thought I’d share it. Do note that this is a major hack and uses undocumented stuff. I’d never do this for shipping code, or for that matter anything someone other than me would rely on. So use at your own risk… Also, if you have a better solution, let me know and I will use that…


// file-path of the shortcut (*.lnk file)
string shortcutPath = Path.Combine(shortCutFolder, string.Format("{0} {1}{2}.lnk", arch, flavor, extra));
Console.WriteLine("Creating {0}", shortcutPath);
// the contents of the shortcut
string arguments = string.Format("{0} {1} {2} {3}{4} {5}", "/k", clrEnvPath, arch, flavor, extra, precmd);

// shell API to create the shortcut
IWshShortcut shortcut = (IWshShortcut)shell.CreateShortcut(shortcutPath);
shortcut.TargetPath = cmdPath;
shortcut.Arguments = arguments;
shortcut.IconLocation = "cmd.exe, 0";
shortcut.Description = string.Format("Launches clrenv for {0} {1} {2}", arch, flavor, extra);
shortcut.Save();

// HACKHACK: update the link's byte to indicate that this is an admin shortcut
using (FileStream fs = new FileStream(shortcutPath, FileMode.Open, FileAccess.ReadWrite))
{
    fs.Seek(21, SeekOrigin.Begin);
    fs.WriteByte(0x22);
}
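For context, the IWshShortcut interface above comes from the Windows Script Host Object Model COM library (IWshRuntimeLibrary), and the shell object would be created along these lines, assuming a COM reference to that library has been added to the project:

// Requires a COM reference to "Windows Script Host Object Model"
IWshRuntimeLibrary.WshShell shell = new IWshRuntimeLibrary.WshShell();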