
Monday, February 17, 2020

System Engineering Guidelines


While building the system that powers memory-intensive compute in Azure, we follow the engineering guidelines below. We use these guidelines to build our BareMetal resource provider, cluster manager, etc. These are useful principles we have accumulated from experience building various systems. What other principles do you use and recommend including?
  1. Close on broad design before sending PRs.
    1. Add the design as markdown to the root of the feature code path and discuss it as a PR. For broad cross-RP features it is OK to place it at the root
    2. For sizable features please have a design meeting
    3. A design meeting requires a pre-read to be sent out beforehand, and the expectation is that all attendees have reviewed it before coming in
  2. Be aware of distributed system quirks
    1. Think CAP theorem. This is a distributed system; network partitions will occur, so be explicit about your availability and consistency model in that event
    2. All remote calls will fail; have retries that use exponential back-off. Log a warning on each retry and an error if the call finally fails (see the retry sketch after this list)
    3. Ensure we always have consistent state. There should be only one authoritative source of truth. Having local data that is eventually consistent with that truth is acceptable; know the maximum time window for eventual consistency
  3. System needs to be reliable, scalable and fault tolerant
    1. Always avoid a SPOF (single point of failure); even for absolutely required resources like SQL Server, retry (see below), fail gracefully, and recover
    2. Have retries
    3. APIs need to be responsive and return in under a second for most scenarios. If something needs to take longer, return immediately with a mechanism to track progress of the background job started (see the background-job sketch after this list)
    4. All APIs and actions we support should have a 99.9% uptime/success SLA. Shoot for 99.95%
    5. Our system should be stateless (keep state in an external data store) and designed to be cattle, not pets
    6. Systems should be horizontally scalable. We should be able to simply add more nodes to a cluster to handle more traffic
    7. Prefer using a managed service over attempting to build or deploy one in-house
  4. Treat configuration as code
    1. Breaks due to out-of-band config changes are too common, so treat config deployment the same way as code deployment (use SCD == Safe Config/Code Deployment)
    2. Config should be centralized; engineers shouldn't have to hunt around for configs
  5. All features must have a feature flag in config.
    1. The feature flag can be used to disable the feature on a per-region basis (see the feature-flag sketch after this list)
    2. Once a feature flag is disabled, the feature should have no impact on the system
  6. Try to make sure your system works on a single box. This makes dev-test significantly easier. Mocking auxiliary systems is OK
  7. Never delete things immediately
    1. Don't delete anything instantaneously, especially data. Tombstone deleted data away from user view
    2. Keep data, metadata, and machines around for a garbage collector to delete periodically after a configurable retention period (see the tombstone/GC sketch after this list)
  8. Strive to be event driven
    1. Polling is bad as the primary mechanism
    2. Start with an event-driven approach and have fallback polling (a sketch follows this list)
  9. Have good unit tests.
    1. All functionality needs to ship with tests in the same PR (no test PR later)
    2. A unit test exercises the functionality of a unit (e.g. a class or module)
    3. They do not have to test every internal function. Do not write tests for tests' sake. If the tests cover all scenarios exposed by a unit, it is OK to push back on comments like "test all methods".
    4. Think about what your unit implements and whether the test can validate that the unit still works after any change to it
    5. Similarly, if you take a dependency on a unit from outside and rely on its behavior, consider adding a test to the callee so that changes to that unit don't break your requirements
    6. Unit tests should never call out from the dev box; they should be local tests only
    7. Unit tests should not require other services to be spun up (e.g. a local SQL server); see the mocked-dependency test sketch after this list
  10. Consider adding BVTs (build verification tests) for scenarios that cannot be covered by unit tests.
    E.g. stored procs need to run against a real SQL DB deployed in a container during BVT, or query routing needs to be tested inside a web server
  11. All required tests should be automatically run and not require humans to remember to run them
  12. Test in production via our INT/canary clusters. Some things simply cannot be tested on a dev setup as they rely on real services being up. For these, consider testing in production over our INT infra.
    1. All merges are automatically deployed to our INT cluster
    2. Add runners to INT that simulate customer workloads.
    3. Add real lab devices or fake devices that test as much as possible. E.g. add a fake SNMP trap generator to test the fluentd pipeline, and have real blades that can be rebooted periodically using our APIs
    4. Bits are then deployed to Canary clusters where there are real devices being used for internal testing and certification. Bake bits in Canary!
  13. All features should have measurable KPIs and metrics.
    1. You must add metrics for new features. Metrics should tell how well your feature is working, whether it has stopped working, and whether any anomaly is observed (see the metrics sketch after this list)
    2. Do not skimp on metrics; we can filter metrics on the backend, which is better than never firing them
  14. Copious logging is required.
    1. Process should never fail silently
    2. You must add logs for both success and failure paths. Err on the side of too much logging
  15. Do not rely on text logs to catch production issues.
    1. You cannot rely on the volume of error logs from a container to catch issues. Have metrics instead (see above)
    2. Logs are for root-causing and debugging, not for catching issues
  16. Consider the on-call experience in all development
    1. Ensure you have metrics and logs
    2. Ensure you write good documentation that anyone in the team can understand without tons of context
    3. Add alerts with direct links to TSGs (troubleshooting guides)
    4. Make alerts actionable so the on-call can mitigate quickly
    5. The on-call should be able to turn off specific features if they are causing problems in production
    6. All individual merges must be able to be rolled back. Since you cannot control when the code snap for production happens, each PR should be structured so that it can be rolled back individually
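
Below are a few illustrative sketches for some of the guidelines above. They are written in Python purely for illustration; all names, helpers, and parameters are assumptions made for the sketches and are not taken from our production code.

First, the retry guideline (2.2): a minimal sketch of a remote call wrapped in exponential back-off with jitter, logging a warning per retry and an error when all attempts are exhausted. `call_remote`, `max_attempts`, and `base_delay` are hypothetical.

```python
import logging
import random
import time

logger = logging.getLogger(__name__)


def call_with_retries(call_remote, max_attempts=5, base_delay=0.5):
    """Invoke a remote call, retrying with exponential back-off plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call_remote()
        except Exception as exc:
            if attempt == max_attempts:
                # Final failure: log an error and surface the exception.
                logger.error("remote call failed after %d attempts: %s", attempt, exc)
                raise
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, base_delay)
            logger.warning("remote call failed (attempt %d/%d), retrying in %.1fs: %s",
                           attempt, max_attempts, delay, exc)
            time.sleep(delay)
```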
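For guideline 3.3, a framework-agnostic background-job sketch: the API hands long-running work to a background job and returns an id immediately, and the caller polls a status endpoint. `JobStore` and its fields are hypothetical; a real service would persist job state in a data store rather than in memory.

```python
import threading
import uuid


class JobStore:
    """In-memory job tracker; a real service would keep this in the data store."""

    def __init__(self):
        self._jobs = {}
        self._lock = threading.Lock()

    def start(self, work):
        """Kick off `work` in the background and return a job id immediately."""
        job_id = str(uuid.uuid4())
        with self._lock:
            self._jobs[job_id] = {"status": "Running", "result": None}

        def run():
            result = work()
            with self._lock:
                self._jobs[job_id] = {"status": "Succeeded", "result": result}

        threading.Thread(target=run, daemon=True).start()
        return job_id  # the API handler returns this sub-second

    def status(self, job_id):
        """What a GET /operations/<job_id> style endpoint would return."""
        with self._lock:
            return dict(self._jobs[job_id])


store = JobStore()
job_id = store.start(lambda: "reimage complete")
print(store.status(job_id))  # the caller polls until the status is terminal
```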
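For guideline 5, a minimal feature-flag sketch with a per-region kill switch, assuming flags live in centralized config. The config shape and the feature names are made up for illustration.

```python
# Feature flags live in centralized config; this dict stands in for that config.
FEATURE_FLAGS = {
    "fast-reimage": {"enabled": True, "disabled_regions": {"westeurope"}},
}


def is_feature_enabled(feature: str, region: str) -> bool:
    """On only if the flag exists, is enabled globally, and is not disabled for the region."""
    flag = FEATURE_FLAGS.get(feature)
    if flag is None:
        return False  # unknown features default to off
    return flag["enabled"] and region not in flag["disabled_regions"]


# Call sites guard the new code path and fall back to the known-good path.
if is_feature_enabled("fast-reimage", "eastus"):
    print("new code path")
else:
    print("old, known-good code path")
```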
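For guideline 7, a tombstone/GC sketch: deletes only tombstone the record away from user view, and a periodic garbage collector hard-deletes after a configurable retention window. `Record` and `RETENTION` are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION = timedelta(days=30)  # configurable retention window


@dataclass
class Record:
    name: str
    deleted_at: Optional[datetime] = None  # tombstone marker; None means live


def soft_delete(record: Record) -> None:
    """Deletes only tombstone the record; nothing is removed immediately."""
    record.deleted_at = datetime.now(timezone.utc)


def visible(records):
    """User-facing views never show tombstoned records."""
    return [r for r in records if r.deleted_at is None]


def garbage_collect(records):
    """Run periodically; hard-delete only records past the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r.deleted_at is None or r.deleted_at > cutoff]
```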
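For guideline 8, a sketch of an event-driven loop with polling only as a safety net for missed events. The queue stands in for whatever event source the service uses, and `fetch_current_state` is a placeholder.

```python
import queue
import time

events = queue.Queue()  # filled by the event source (e.g. a bus or webhook receiver)
POLL_INTERVAL = 300     # seconds; fallback only, so it can be infrequent


def reconcile(state):
    """Shared logic: bring the system in line with the observed state."""
    print("reconciling", state)


def run_loop(fetch_current_state):
    last_poll = 0.0
    while True:
        try:
            # Primary path: event-driven, reacts as soon as something changes.
            reconcile(events.get(timeout=1))
        except queue.Empty:
            pass
        # Fallback path: an occasional poll catches anything the events missed.
        if time.time() - last_poll > POLL_INTERVAL:
            reconcile(fetch_current_state())
            last_poll = time.time()
```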
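For guidelines 9.6 and 9.7, a mocked-dependency test sketch: the unit test never leaves the dev box, because the SQL dependency is injected and mocked rather than spun up. `ProvisionService` and `query_one` are hypothetical.

```python
import unittest
from unittest import mock


class ProvisionService:
    """Toy unit under test; talks to a SQL client injected via the constructor."""

    def __init__(self, sql_client):
        self._sql = sql_client

    def is_blade_free(self, blade_id):
        row = self._sql.query_one("SELECT state FROM blades WHERE id = ?", blade_id)
        return row is not None and row["state"] == "Free"


class ProvisionServiceTests(unittest.TestCase):
    def test_is_blade_free_returns_true_for_free_blade(self):
        sql = mock.Mock()
        sql.query_one.return_value = {"state": "Free"}
        self.assertTrue(ProvisionService(sql).is_blade_free("blade-1"))

    def test_is_blade_free_returns_false_when_blade_missing(self):
        sql = mock.Mock()
        sql.query_one.return_value = None
        self.assertFalse(ProvisionService(sql).is_blade_free("blade-2"))


if __name__ == "__main__":
    unittest.main()
```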
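Finally, for guideline 13, a metrics sketch of a feature path that emits metrics on both the success and failure paths, plus latency. `emit_metric` stands in for whatever metrics client the service uses; it is not a real API.

```python
import time


def emit_metric(name, value, **dimensions):
    """Placeholder metric sink; a real service would call its metrics client here."""
    print(f"metric {name}={value} {dimensions}")


def reimage_blade(blade_id):
    start = time.monotonic()
    try:
        # ... the actual feature work would happen here ...
        emit_metric("reimage.success", 1, blade=blade_id)
    except Exception:
        emit_metric("reimage.failure", 1, blade=blade_id)
        raise
    finally:
        # Latency is emitted on both success and failure paths.
        emit_metric("reimage.duration_ms", (time.monotonic() - start) * 1000,
                    blade=blade_id)
```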

Friday, February 14, 2020

The C Word - Part 2



Our confrontation with lymphoma started in 2011, when the C word entered our life and my wife was diagnosed with Stage 4 Hodgkin's sclerosing lymphoma. We have fought through and continue to do so. Even though she is in remission now, the shadow of cancer still hangs over us.

We had just moved across the world from India to the US in 2010 and had no family and very few friends around. We mostly had to duke it out ourselves with little support. We were able to get the best treatment available in the world through the Fred Hutch and Seattle Cancer Care Alliance. However, we do realize not everyone is fortunate enough to be able to do so.

Our daughter has decided to do her part now and raise funds through the Leukemia & Lymphoma Society. If you'd like to help her, please head to

https://events.lls.org/wa/SOYSeattle20/pbasu