Saturday, May 20, 2017

365 days at Microsoft UK

I completed my first year at Microsoft UK earlier this month. The fact that I haven’t been blogging is probably the best evidence for the only summary I can give: it’s been busy!

I work in a team dedicated to helping some of our larger customers move to the cloud, giving guidance and architectural support across almost the entire range of Azure services. I have retained one of my focuses from before joining Microsoft – Azure Service Fabric – and have effectively become the team’s go-to person for this technology.

Moving from a small and agile company to a large American one – in its largest subsidiary outside the US – with a deep hierarchy was a big change. One thing I have found curious is that I have more autonomy and freedom in how I approach each customer engagement, manage my time and choose where I work than I ever had at my own company. The culture of autonomy and collaboration runs deep, with everybody encouraged to grow, to rely on others and to help them, and the fact that I’m working in by far the most interesting area of what the company does – Microsoft Azure – is a large part of what makes the challenge interesting. I have to admit I miss some things: working in close-knit teams, going into technical depth on concrete problems, and having more of a say in decisions that affect me personally.

I not only changed company, but also countries. I quickly noticed some differences between Portugal and the UK:

  • I have spent more time on the phone and in conference calls in this last year than in the entirety of my career! The headset is always with me. Most meetings, whether with colleagues or customers, happen over Skype – even if face-to-face is important, traveling is expensive and time consuming.
  • The meetings start on time and very rarely overrun;
  • No “solutionizing” in meetings – most of the meetings I attend are either to communicate or to make decisions. Work is done offline.
  • A lot can be achieved with 20 or 30 minute calls, if people are disciplined and focused. This was unheard of back in Portugal.
  • When I arrived, my lunch slot was almost always hijacked by meetings. Days with 6 back-to-back calls are common, and if I want to protect my time (and sanity!) I have to refuse meetings at lunchtime and make sure I reserve time for things like travel [to London or other places] or self-training.
  • A lot of work is done remotely, and the company doesn’t really care where I am physically as long as I am working towards reaching my targets and goals.

Regarding life in the UK outside of work, I do feel a big difference in scale and challenge, but not everything is positive: Brexit affected me (personally) a lot and massively hurt my image of the country – even if London is a world apart. Also, this country has a serious problem with infrastructure, from telecoms (e.g., GSM or 3G coverage) to public transportation, with constant delays, problems and strikes. By comparison, Portugal has a service to be proud of!

I had been lacking the inspiration and time to look back at this past year, but today I had my evaluation meeting, and it seemed the right moment for a retrospective. So there.

Saturday, May 14, 2016

Joined Microsoft UK

Great news! After several years working closely with people at Microsoft in Lisbon as CTO of Create It, as of Monday May 9, I have joined Microsoft in the UK subsidiary. I'll be working as a Cloud Solutions Architect in the EPG team, completely focused on Azure, my topic of choice in the last few years.

This blog will live on, hopefully with more frequent posts, now more focused on Azure architecture. One disclaimer I must make: the opinions and views expressed in this blog are mine and do not necessarily state or reflect those of Microsoft. Just so you know :-).

Wish me luck!

Tuesday, February 23, 2016

How (not) to hire - notes on hack.summit()’s session

Pluralsight’s virtual hack.summit() event started yesterday, and I managed to follow most of the sessions. The last session was by Hampton Catlin (“creator of Sass, Haml, m.wikipedia.org, and book author”) on the topic of “How (not) to Hire”. The session was partly a reflection on the success of his past hires, but also shared plenty of alternative and interesting ideas on how to hire people/developers.

One of the ideas I found most relevant was that you WILL make bad hires, and that you should face letting people go as naturally as possible: “It’s your job as a manager”. I’ve had to do it in the past, and it’s not easy, but he’s right. “What are you afraid of when hiring someone? That he’ll bring down the company?” These questions frame the process in a “lighter way”: you don’t have to get it right the first time, and should accept that you'll make mistakes. I guess most of us are so cautious when hiring because of the effort and time involved, but also because of the failure itself and being associated with it. I’ve made bad hires in the past, and I've also seen the “stigma” attached to one in a colleague, so we should accept that mistakes will inevitably happen – no mistakes, no learning.

Another relevant insight from the session was the difference between Positive and Negative hiring. In Negative hiring, you try to find problems, reasons NOT to hire. In Positive hiring, you try to find out how awesome someone can be, what their strong points are, how they can make a difference. I bet most of us that hire follow the first approach (me included). He says that he used to follow the “Veto Rule”, whereby if anyone on the team wasn’t convinced the interviewee was awesome, they wouldn’t hire. This is exactly what I did for most of my career, in the footsteps of “Peopleware”’s recommendation to involve the team in the hiring process. An alternative approach, he mentions, is the “Believer Rule”, whereby if anyone on the team is really convinced someone would be awesome, that person should be hired. The truth is probably somewhere in the middle, but this is a great idea.

Catlin also advocates hiring mostly based on someone's ability to collaborate and work in a team, and not only on technical skills – “It’s easier to teach code than it is to teach empathy”. This is true, but I’d say there must also be a technical baseline.

Another idea, and this was to me the most interesting one, was the suggestion that candidates themselves should design their own hiring process, giving them a chance to show off and show their best side. What the candidate chooses is obviously a clue to their strong and weak points. I find this idea brilliant, and will keep it in mind for the future.

Another suggestion he gives is that simply discussing technology with candidates is a very effective way of assessing the person and his/her passion, interest and skill in technology – asking about some specific project, what they think of Azure vs AWS, or of C# vs Python, etc.

Finally, during the Q&A there was another interesting insight, or framing of the hiring process overall. Some companies (I think I’ve read about Google doing this) try to hire people that are better than the average of the company, constantly “raising the bar”. The insight is this: if you keep doing this, at some moment in time you yourself should NOT be employed at the company! :)

Anyway, lots of good ideas in the session, to keep in mind for the future.

Thursday, February 11, 2016

SIGFOX Makers tour Lisbon

I just spent the afternoon at SIGFOX’s event in Lisbon, learning about the company offering and doing some hands-on demos with an Arduino Uno and SIGFOX’s network.
What is SIGFOX? To be honest, I didn’t exactly know when I got the Meetup email from Productized about the event. SIGFOX is a telecom operator exploring a range of radio frequencies and a kind of modulation (UNB – Ultra-Narrow Band) that make it especially interesting for the IoT world.

Up until today, my only experience with IoT was with the Raspberry Pi, and the code to handle Wifi connectivity failures has given me several headaches. Bluetooth, which is more widely used with Arduinos, is apparently not simple either, due to the need to pair, for example. SIGFOX’s network promises several advantages in this area: sending (and receiving) messages is simply a matter of making a send call – no need to pair, connect, authenticate, etc. Base stations (much like in GSM) pick up the message, de-duplicate it, deliver it to SIGFOX’s infrastructure, and either process it there or route it to our own application server via an HTTP call. Another major selling point is the low power usage: the protocol is designed to maximize energy efficiency and use energy only when needed – essentially, the chip uses 20-35mA when sending messages and is supposed to be in standby most of the time, with the device polling for incoming commands regularly or when required. A solution using this approach is supposed to last several years on a battery.

This simplicity and low power consumption, clearly major selling points, are offset by some limitations on the range of applications: messages have a payload of just 12 bytes (XML and JSON are out, as is anything involving media transmission), which means you’ll be optimizing messages at the bit level. Additionally, to comply with European regulations on spectrum usage, you can only send up to 140 messages per day (about one every 10 minutes). Finally, the transmission rate is pretty low, at 100 bits/sec – so it pays to keep messages as small as possible. Even with these limitations, there is clearly a wide range of applications for this approach – my Raspberry Pi temperature setup uses Wifi and is constantly plugged in, while a SIGFOX+Arduino solution would probably just require batteries.
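To make the 12-byte constraint concrete, here’s a small sketch (my own, not SIGFOX sample code) of packing a sensor id and a temperature reading into just 3 bytes in C#:

```csharp
using System;

class SigfoxPayload
{
    // Pack a sensor id (0-255) and a temperature in 0.1 °C steps
    // into 3 bytes - a quarter of the 12-byte SIGFOX payload.
    public static byte[] Pack(byte sensorId, double temperatureCelsius)
    {
        short tenths = (short)Math.Round(temperatureCelsius * 10);
        return new byte[] { sensorId, (byte)(tenths >> 8), (byte)(tenths & 0xFF) };
    }

    // Reverse the packing on the server side.
    public static void Unpack(byte[] payload, out byte sensorId, out double temperatureCelsius)
    {
        sensorId = payload[0];
        short tenths = (short)((payload[1] << 8) | payload[2]);
        temperatureCelsius = tenths / 10.0;
    }
}
```

At 0.1 °C resolution and one byte for the sensor id, nine of the twelve bytes are still free for other readings.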

Interestingly, there is already network coverage in Portugal (and in the Netherlands, France and Spain – as well as several other major cities worldwide), and prices are supposed to be very affordable (2€ to 25€/year, if I understood correctly), depending on factors such as usage. In one of the demos, a message sent by the board was picked up by 4 base stations – the workshop was in the city’s downtown.
[Photo: Snootlab Akero board on an Arduino Uno]
The platform has mechanisms (callbacks) that can handle requests from the devices, either semi-automatically by parameterizing preset responses (e.g., return the current time), or by making HTTP calls to a configured application server on our end. Interestingly, Azure Event Hubs (which I also use in the Raspberry Pi tests) are also natively supported, with the network automatically posting incoming messages from devices into a configured event hub – and the Azure IoT Suite will also be supported.

I thoroughly enjoyed the afternoon in the workshop, and was surprised at how easy it was to use the network and the Arduino. I’d seen demos of Arduino before, and to be honest was not looking forward to being back coding C, but it was easier than I thought (I still have this book around, from when I learned the language in high school).

As to doubts and issues: IoT security seems to be in the news every day, so this is obviously a concern. I don’t know much on the topic, but device authentication, server authorization, and server authentication in the device seem obvious concerns. The trainer, Nicolas (great job!), didn’t have much time so he didn’t expand on the topic (and I still don’t have the slides with me), but it is a concern I’ll have to explore.

The SIGFOX demos done in the workshop are available on GitHub. The board provided can read the temperature (with a degree resolution), so I might do the exercise of converting my C# code from the Raspberry to run on the Arduino.

PS: I have very basic electricity/electronics know-how, and somehow see myself with a couple of Raspberry’s, now an Arduino, and several sensors, cables and breadboards on the way from AliExpress. What is happening to me? :-)

Monday, January 11, 2016

IoT: Raspberry Pi2 and Wireless connectivity losses

As I have described elsewhere, I have setup my Pi2 with a temperature sensor which posts the information to an Azure Event Hub. I’ve made the code more robust to handle different errors, but eventually the Pi2 – every 3 to 4 days - would just stop sending readings and responding to pings.
The problem is apparently related to the Wifi dongle, so (with a little help from a friend) I found the commands that restart the wlan, and after adding the code, the problem seems to be fixed. Here’s the relevant C# code:

public static void RestartWifi()
{
    // Bring the wlan0 interface down, give it a few seconds, then force it back up.
    // Requires: using System.Diagnostics; using System.Threading;
    RunCommand("/sbin/ifdown wlan0");
    Thread.Sleep(5000);
    RunCommand("/sbin/ifup --force wlan0");
}

private static void RunCommand(string command)
{
    Process proc = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "bash",
            Arguments = "-c \"" + command + "\"",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        }
    };

    proc.Start();
    proc.WaitForExit();
}

Thursday, January 7, 2016

Videos of 2015’s sessions online

Just found out about 3 videos of recent sessions being published online, so I’m posting the links.
The video of my WebCamp 2015 session on Azure WebApps is now online at Channel 9. The 45 minutes were clearly not enough for everything I wanted to show! My summary of the session is here.

Also available at Channel 9 is the session I did earlier this year at TechRefresh 2015 on the Azure App Service architecture, where I focused much more on Azure LogicApps. My summary of the session is here.

Finally, the video stream of the roundtable at Merge Lisbon #3 is available now on Youtube. The roundtable itself starts at around 01h45m. My summary of the session is here.

Monday, January 4, 2016

«Creating a great [software] engineering culture» roundtable at Merge Lisbon #3 – Post event thoughts

(this post is somewhat late, but this time of the year is always busy).

I thoroughly enjoyed the roundtable at Merge Lisbon #3. It was a very interesting and lively discussion, and having two people coming from the online product world/startup, plus two from the development/system integration services world, contributed a lot to the discussion and exchange of different points of view. RUPEAL’s organization was top-notch, and I’ll be waiting for the future editions of Merge Lisbon.

Like I said, there were several topics on the table, starting with what creates a great engineering culture, what culture is anyway, and what influence the “founding fathers” of a company have on its culture (“if the founders live 10.000km away, not much”, was one of the answers).
Having founded a services company, of which I was the CTO for 14 years, these are questions that made me think and that I feel are my own. In my view, my company was a mix of the personal views of each of the founders (which mostly – but obviously not exactly – coincided): a vision of values like agility, transparency, quality of delivery, informality, etc. Over time, I do think this changed – one day at a time – towards a more process-driven way of working, a trend that was luckily identified and reversed.

Reflecting on my own experience, I think that a founder working in his own company has a bond that is much stronger than that of employees, who can leave whenever they want. Making sure he creates an adequate workspace – with a great team, challenging work, a great culture – is essential to his own happiness and to the company itself. And this leads me to one of the points mentioned that night: a lot of this culture issue relates to “fit” – between the people, the founders and the way the company does business. Which reminds me of the recent example of the Holacracy implementation at Zappos – and while I can empathise with the wish to have a specific way of managing the company, I can’t feel the same about the “either you’re in, or leave” way of putting things. I’m sure Zappos lost amazing talent in that process. Was it worth it?

One of the big topics discussed at the roundtable was the hiring process, and how to decide who to hire. My take is that while technical skills are critical, behavioural fit and the ability to work in a team are just as essential. One reference I gave out was the (absolutely mandatory) book Peopleware by Tom DeMarco and Timothy Lister, which suggests that the team the new employee is joining should be involved in the hiring/interview process. Other books that have shaped my view on this are Belbin’s Management Teams: Why they work or fail and Team Roles at Work. These argue that – at least in management teams – complementarity between the “types” of people is what makes teams work. One extreme example: does anyone believe a team of 5 people like Steve Jobs would get anything done?

Some roles Belbin identifies are the Shaper (the classical “I Have a Vision” guy), the Chairman/Coordinator, the Creative (“Plant”), etc. Joining these views together leads, however, to a very real risk: that of creating a “monoculture”, where people who have very different team roles don’t get hired. For example, a company with a focus on “conservative” team roles will tend not to hire creatives/disruptors (“Plant” and “Resource Investigator”, in Belbin’s terms). Would you hire someone like Elon Musk for your team, if you were recruiting him?
I do ask myself if this role categorization captures all there is to it, and tend to think it doesn’t. In my first job, over 90% of the people (web designers/creatives, all) voted in the same left-wing party, for example. Some more food for thought.

A specific question from the audience was on the value of having handbooks with the “principles” and vision of a company, much like Valve’s Handbook for new Employees. While I agree that these documents can be seen as a form of “marketing”, I still see value in them. They represent a vision of what the company strives to be, helping keep focus on what the core values are. Time generally pushes organizations towards the conservative/bureaucratic side (as Julian Birkinshaw brilliantly argued in his Coursera course on Managing the Company of the Future), and these documents can help fight that tendency. Other examples I know of are Ricardo Semler’s “Employee Survival Manual” at Semco (a sample of which is at the end of his book, Maverick), and I recently found out about OutSystems’ The Small Book of The Few Big Rules, a great Portuguese example.

The topic of being a specialist (someone with specific skills in a given area of technology) or a generalist (broad know-how in several areas of technology) was also on the table. The discussion leaned towards preferring specialists in the hiring process – people who can quickly deliver results in their area of expertise. My only take here was a sentence from an experienced software architect I admire professionally, describing himself: “I’m an expert in generalities”. This is somewhat obvious if you think about it, but as you grow in experience you will start doing architecture, and it’ll be impossible to know all the technical details of the different parts of your solutions. You’ll have to focus more on knowing the capabilities, and forget about knowing the specific details of each technology. In my experience, making this leap (realizing you don’t have to be an expert in everything) is perhaps the most important one on the career path from developer to tech lead/architect.

Another of the questions from the audience was on how to keep the culture of a company when a team is working at a customer’s site. This is a challenge I know well from my work over the last 4 years, and my answer was to take your company’s culture with you. One example (a small thing) is simply shaking everybody’s hands when arriving at the site – including the client’s people – and going for a cup of coffee together [as you would in your office]. But this is a simplistic answer and example. Other factors I think are important: try to bring people over to your office regularly, even if just for half a day every two weeks; have leadership/management people work at the site as well (even if in different contexts); and in internal communication, try not to forget that some people are out of the office (for example, when scheduling training sessions, birthday lunches, etc.). “But what if the client’s culture turns out to be better than yours?” “That’s good in two ways: first, you’ve learned something about what motivates your team, and second, you can try to bring that experience back to your own company”.

There is a promotional video of the event, which you can check out at KWAN’s Facebook page, for a small taste of what it was like.

Thursday, December 10, 2015

«Creating a great [software] engineering culture» roundtable at Merge Lisbon #3

This evening I’ll be participating, together with people from local companies Uniplaces, Talkdesk and GrandUnion, in a panel on how to create a great software engineering culture, at the third meeting of Merge Lisbon.

An interesting topic, in a world where startups are born every day focused on delivering an MVP as soon as possible (and not on quality), where people stay less and less time in their jobs or simply freelance and pick only the really enticing projects, where Agile can on occasion lead to sloppiness, where people (rightly) enjoy working from home, and where – very often – the “bits are worth less”, for example when developing microsites for events or apps for short-lived festivals. Being based in Lisbon, I could also add: and where people leave all the time to work in the UK, the Netherlands or Switzerland.

I’ll come back here after the panel, surely with new ideas.

More information here (but it’s been fully booked for some time, I’m told).

Monday, December 7, 2015

[Micro-post] Puretext has a new version!

I’ve been using this little tool for years now, and just noticed it has a new version. PureText is a simple tool that sits in your system tray, uses 0.1 MB of RAM (!), and lets you press Windows+V to paste text with the formatting removed. You can change the key combination, but this one is perfect.

Extremely useful! Completely free, downloadable here.

Friday, December 4, 2015

«Azure WebApps: Why aren’t you using them yet?» @ Microsoft WebCamp 2015 - Lisboa

Microsoft Portugal held the yearly WebCamp event this last Wednesday, focused on Web technologies, both from Microsoft and Open Source. The openness to OSS is clear, with sessions focused specifically on that approach to software development. The term “best of breed” does come to mind, in the sense that more and more solutions include parts from different sources/vendors that have to work well together.

Anyway, my session was focused on Azure WebApps, and I expected the technology to be familiar to most people by now, but was surprised to find out it is not. The session had an enterprise-ready focus, which I had to adjust somewhat as a consequence.

As usual, the session was demo-heavy, and unfortunately the 40 minutes were not enough to show them all. Here’s what I had planned:

  • Create a site and publish from Visual Studio – showing the simplicity of the process and the new window in the SDK 2.8.1;
  • WebApps in the Azure Resource Manager – open up https://resources.azure.com and show how the site resources and their settings are represented;
  • Backup and Restore – using a site (a to-do app sample) I had previously deployed, I showed how the Backup and Restore features work, including the option to embed the database in the backup. For me, one of the killer features of this PaaS offering;
  • Remote Debugging – it’s always amazing to be able to debug code that is running remotely in the cloud. There was a specific session on Application Insights, so I opted not to go into it;
  • Staged Publishing – deployment slots are huge, both in supporting dev/test/quality environments and in team development itself (e.g., having a slot for each developer). I had a chance to show a swap and “setting stickiness”, but had to skip the “A/B testing” support (where x% of the traffic is directed to one slot, and the rest to another);
  • Traffic Manager – this is another impactful demo: having pre-created sites in North Europe, Brazil and Japan, I used Traffic Manager to unify them under a single domain name, and then www.whatsmydns.net to check the dynamic resolution. Always a great demo (I’m impressed myself :-));

The session time was only 40 minutes, so other demos had to be left out:

  • Redis Session State Provider – the idea here was to explain Redis (an instance of which I had pre-created), install the package with the session state provider, change web.config, and use redis-cli to show the keys/sessions being added to the repository;
  • Scaling – with Redis as the backend for the user’s session state, the obvious next demo was using scaling/automatic scaling to see it working, and showing how scaling back down to a single server didn’t imply a loss of session;
  • Web Tests (not to be confused with Load Tests) – one of the newest features in WebApps is Web Tests, a service that works similarly to what services like AreMySitesUp provide: access your site from several locations worldwide, and if, for example, 3 of them can’t reach it more than 3 times, send an email alert. Discreet, but helpful.
  • IP Blocking – this final demo addressed one specific complaint about the way IP blocking works in Azure WebApps: either you include the blocked IPs in Web.config, or you use an App Service Environment (ASE). ASE implies the Premium service tier, which is costly, and adding an IP to the web.config implies a site restart – and if you have sites in several regions, you have to make the change in all of them. So the demo takes the application-level path: the IPs to block are simply added to Redis (e.g., IPB1.2.3.4) and an Action Filter in the MVC project checks the source IP and returns HTTP 403 if it’s in the blocked list. Quick and yet not dirty :-).
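The filter’s core check can be sketched like this (the names are mine; the delegate stands in for a StackExchange.Redis KeyExists call, and the surrounding MVC ActionFilter is only hinted at in the comment):

```csharp
using System;

// Sketch of the application-level IP blocking demo: blocked IPs live in
// Redis under keys like "IPB1.2.3.4" (the convention from the demo).
class IpBlocklist
{
    private readonly Func<string, bool> _keyExists;

    public IpBlocklist(Func<string, bool> redisKeyExists)
    {
        _keyExists = redisKeyExists;
    }

    // In the MVC ActionFilter's OnActionExecuting, call this with
    // filterContext.HttpContext.Request.UserHostAddress and set
    // filterContext.Result = new HttpStatusCodeResult(403) when true.
    public bool IsBlocked(string ipAddress)
    {
        return _keyExists("IPB" + ipAddress);
    }
}
```

Keeping the Redis lookup behind a delegate also makes the check trivially unit-testable without a Redis instance.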

That was it. A full room, I’m just sorry I didn’t have the time to show everything I had prepared. Maybe I should do some screencasts?

Monday, November 16, 2015

Revista Programar: Azure Logic Apps: o futuro dos backends? [Portuguese]

My article "Azure Logic Apps: o futuro dos backends?" just made the cover of the 50th edition of the "Revista Programar" magazine. The article describes my view of the historical evolution from Mashups to SOA to Microservices, and describes the current version of Azure Logic Apps, Microsoft's implementation of that architectural view.

If you do happen to read portuguese :), the direct link to the article is here.

Friday, June 26, 2015

IoT: Raspberry Pi2 and Azure Event Hubs and Mono and SQL Database–experiences

A couple of months ago I bought a Pi2, to complement the Pi1 I use mostly as a media center. I also bought modmypi’s Raspberry Pi YouTube Workshop Kit, a pack that includes a breadboard, cables and a set of sensors, and that pairs with a set of tutorial videos on how to set it up. The tutorials are all done using Python, but my goal was (obviously) to do the same using .Net/Mono on Raspbian.

Using an approach and code that initially was similar for example to Jan Tielens’ in his “Raspberry Pi + GPIOs with DS18B20 + Azure + C# = Internet Thermometer” article, and which I’ll describe in a later post, I now have my Pi2 sending temperature readings to an Azure Event Hub using REST, from where it is read by Azure Stream Analytics and then dropped into an Azure SQL Database. I still hope to wire this up to PowerBI, but there doesn’t seem to be a way at the moment to connect my MSDN Azure account with my corporate account where we have 100 PowerBI licenses, so that will have to wait.

What I wanted to share for now are some tips regarding the process, which are not described elsewhere in other articles I read on the net, and which I guess are very specific to the IoT/sensor world (to which I am new). Keep in mind that my simple goal was to have the Pi2 send temperature readings to Azure every minute.

Service Bus Queue vs Event Hub

My initial code was posting readings to an SB Queue. I hadn’t anticipated using Event Hubs, since an event every minute doesn’t justify the platform’s capabilities. It turns out that Azure Stream Analytics doesn’t support SB Queues as a source, so I had to change the connection code that posts temperature readings using REST. The changes were very few, but included:

  • Dropping the support for custom headers, which I was injecting in the message sent to the Service Bus (example: sensor id). I had to move this information into the message payload itself;
  • Changing the URL to which the message is posted, including the API version. To write to SB I was using https://myservice.servicebus.windows.net/queuename/messages?timeout=60&api-version=2015-01 and had to change this to: https://myservice.servicebus.windows.net/eventhubname/messages?timeout=60&api-version=2014-05 .
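As a sketch of the result (the namespace and hub names are the placeholders from above, the JSON shape is illustrative, and the SAS token creation is omitted):

```csharp
using System;
using System.Net;
using System.Text;

class EventHubRestClient
{
    // Build the Event Hubs REST endpoint described above.
    public static string BuildUrl(string serviceBusNamespace, string eventHubName)
    {
        return "https://" + serviceBusNamespace + ".servicebus.windows.net/"
             + eventHubName + "/messages?timeout=60&api-version=2014-05";
    }

    // Sketch of the send: the former custom headers (e.g. the sensor id)
    // now travel inside the JSON body instead.
    public static void Send(string url, string sasToken, string jsonPayload)
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.Authorization] = sasToken;
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            client.UploadData(url, "POST", Encoding.UTF8.GetBytes(jsonPayload));
        }
    }
}
```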

Reading the Temperature – closing the Stream

The sensor-reading code in Jan Tielens’ article (and other similar code found on the net) doesn’t allow for repeat readings in a loop. This line of code:

var w1slavetext = deviceDir.GetFiles("w1_slave").FirstOrDefault().OpenText().ReadToEnd();

… actually leaves a text stream open (a StreamReader), which has to be closed for repeat readings to work. So that was another fix.
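The fixed version (my sketch of it) wraps the reader in a using block, so the underlying stream is disposed and the file can be re-read on every loop iteration:

```csharp
using System.IO;
using System.Linq;

class SensorReader
{
    // Read the 1-wire sensor's w1_slave file; the using block disposes
    // the StreamReader, so repeat readings in a loop work.
    public static string ReadRaw(DirectoryInfo deviceDir)
    {
        FileInfo file = deviceDir.GetFiles("w1_slave").FirstOrDefault();
        using (StreamReader reader = file.OpenText())
        {
            return reader.ReadToEnd();
        }
    }
}
```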

The main loop – time between readings

My application is a simple console application implementing a while(true) loop that does this:

  • Read a temperature value from the sensor
  • Send a message to an Event Hub, by doing an HTTP post of a JSON-serialized message
  • Wait for 60 seconds with Thread.Sleep

One thing I noticed was that the readings were spaced not 60 seconds apart, but 60+something. This “something”, usually 2-3 seconds, was obviously the time the first two steps took. To fix this I created a System.Diagnostics.Stopwatch at the start of the main loop, and at its end waited for 60 seconds minus the time the first 2 operations took to execute.

Now the readings were close enough (to a few milliseconds) to one every minute. Simple fix, simple mistake to make.

The main loop – long operations

The previous solution has a problem, which I quickly found out about. After running for a few hours, some posts to the Event Hub took a long time – more than 60 seconds. Maybe the cause was some Wifi or network issue, I don’t know. But this meant I was calling Thread.Sleep with a negative value, which crashed the app. So, another fix: if the operations take more than 60 seconds, don’t sleep and do another temperature reading immediately.
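Both timing fixes can be sketched together like this (the names are mine):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class ReadingLoop
{
    // How long to sleep so readings stay one interval apart: subtract the
    // time the read+send took, and return zero instead of a negative
    // (app-crashing) value when an iteration overran the interval.
    public static int RemainingSleep(int intervalMs, long elapsedMs)
    {
        return Math.Max(0, intervalMs - (int)elapsedMs);
    }

    public static void Run(Action readAndSend, int intervalMs)
    {
        while (true)
        {
            Stopwatch stopwatch = Stopwatch.StartNew();
            readAndSend();
            // Sleep only for what's left of the interval; if the send
            // overran it, read again immediately.
            Thread.Sleep(RemainingSleep(intervalMs, stopwatch.ElapsedMilliseconds));
        }
    }
}
```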

The main loop – SAS tokens’ lifetime

At the top of the app, before the main loop, the first thing I do is create the SAS token used to connect to the event hub. This token has a lifetime, which I think is one hour by default. So, as you can expect: after one hour of reading temperature (60 readings), the SAS token expired, sending the message failed with a 401 (unauthorized), and an exception stopped the app. Back to the code for another simple fix: wrap the sending of the message to the event hub (which uses the WebClient class) in a try/catch, and when I catch a WebException with 401 as the status code, recreate the SAS token and send the message again.
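The retry can be sketched with delegates standing in for the token-creation and WebClient send code (a real version would inspect the WebException’s response for the 401 status before retrying; here, for brevity, any WebException triggers one renewal):

```csharp
using System;
using System.Net;

class TokenRenewingSender
{
    private string _sasToken;
    private readonly Func<string> _createSasToken;
    private readonly Action<string, string> _send; // (token, message)

    public TokenRenewingSender(Func<string> createSasToken, Action<string, string> send)
    {
        _createSasToken = createSasToken;
        _send = send;
        _sasToken = createSasToken();
    }

    public void Send(string message)
    {
        try
        {
            _send(_sasToken, message);
        }
        catch (WebException)
        {
            // The token's lifetime elapsed (401): mint a new one
            // and retry the same message once.
            _sasToken = _createSasToken();
            _send(_sasToken, message);
        }
    }
}
```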

The main loop – the all-encompassing try-catch

The last fix came after I started getting unhandled exceptions which I’m not sure were due to the Pi 2, Mono, the network, or something else: I just wrapped the code inside the main loop in a general try-catch, logged any error to the console output, and continued the loop execution. A “just in case” solution.
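The wrapper, sketched as a helper that runs one pass of the loop body (the real app just inlines the try-catch inside the while(true)):

```csharp
using System;

class GuardedLoop
{
    // Run one pass of the loop body; log and swallow any exception so the
    // outer while(true) keeps running. Returns whether the pass succeeded.
    public static bool TryIteration(Action body)
    {
        try
        {
            body();
            return true;
        }
        catch (Exception ex)
        {
            Console.WriteLine("Iteration failed: " + ex.Message);
            return false;
        }
    }

    static void Main()
    {
        Console.WriteLine(TryIteration(() => { }));                               // True
        Console.WriteLine(TryIteration(() => { throw new Exception("no wifi"); })); // logs, then False
    }
}
```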

Finally, getting information about the device

This is not specific to the handling of the readings themselves, but I think it’s relevant. In the payload of my messages I wanted to include some information specific to the device, and found out I could get this information by reading from some devices/streams provided by the Raspbian OS. I dug into some samples on the net, and ended up writing code that gets both the serial and the model name. The OS calls I do, using the Process/ProcessStartInfo classes as a way to get into bash, are:

cat /proc/cpuinfo | grep Serial | awk '{print $3}' -- this gets you the device’s serial, for example “00000000f5b55a06”

cat /proc/cpuinfo | grep 'model name' | head -n 1 -- this gets you a string from which you can extract the model name, for example “ARMv7 Processor rev 5 (v7l)”
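The C# side of those calls, a minimal sketch assuming bash is available at /bin/bash:

```csharp
using System;
using System.Diagnostics;

class DeviceInfo
{
    // Run a shell pipeline through bash and return its trimmed stdout.
    public static string Bash(string command)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "/bin/bash",
            Arguments = "-c \"" + command + "\"",
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var process = Process.Start(psi))
        {
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            return output.Trim();
        }
    }

    static void Main()
    {
        // On a Pi these print e.g. "00000000f5b55a06" and the model-name line.
        Console.WriteLine(Bash("cat /proc/cpuinfo | grep Serial | awk '{print $3}'"));
        Console.WriteLine(Bash("cat /proc/cpuinfo | grep 'model name' | head -n 1"));
    }
}
```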

 

I’m still cleaning up the code and making sure it’s stable, but I’ll post it to GitHub pretty soon. Contact me if you want to see it sooner. Anyway, one thing I did already realize is that the coldest time of the day, at my place at least, is between 22:00 and 02:00, which surprised me, and the temperature variation is about 4 degrees Celsius on average. Interesting info!

Tuesday, June 16, 2015

«The Azure App Service Architecture» @ Microsoft Developer TechRefresh 2015–Lisboa

Yesterday we had another Developer TechRefresh at Microsoft Lisboa, where my colleague André Vala and I both presented sessions. My session was a repeat of the Build 2015 session of the same name, presenting the architecture and demoing the two main new components of the new Azure App Service: API Apps and Logic Apps. The first, especially, I would say is almost ready for prime time, and both are a very good play by Microsoft in the micro-services/mashup space. Very interesting technology, although obviously not everything is finished yet and there are some issues.

Session slides are available on SlideShare. The video of the session was recorded; I will link to it when it becomes available.

PS: I just wished Microsoft fixed/replaced the “new” Azure portal, I still have frequent errors using it.

Friday, May 22, 2015

ITARC15 Architecting a Large Software Project - Lessons Learned

This morning I presented my “Lessons Learned” workshop at ITARC 2015 in Stockholm, Sweden. This session had previously been presented at Netponto, and was improved with more content targeted at software architects and updated with more current information. The goal of the workshop (3.5 hours!) is to share experiences and discuss approaches to developing complex software projects. I got great feedback from the participants, and provocative and relevant questions.

One issue that did come up is the definition of “large”: this was a large project by Portugal’s standards, plus it was complex and took a long time until release. But by local Swedish standards, it wasn’t that “large” :). Even so, the contents are general enough to be interesting, or so I was told. It was also interesting to learn about some cultural differences between Portugal and Sweden, and those were much smaller than you would expect.

Great session, loved doing it. Here’s the slide deck, for those interested.

Monday, December 1, 2014

«Architecting a Large Software Project - Lessons Learned» @ Netponto 50th Meeting - Lisboa 22/Nov

Two weeks ago I presented a session at the 50th meeting of the Lisbon Netponto Group, the largest .NET development community in Lisboa. This two-hour session, filled with real examples, described the lessons learned in a 3-year project I was involved in as a Software Architect, which had its first release this past summer and has seen early success in the customer organization. The compilation of lessons includes the feedback of the developers on the team, and was a huge learning experience.

The slides include architecture and technical aspects, as well as negotiation and functional hints. It’s not meant to be an absolute best-practices architecture guide; it’s only the result of this very specific project.

You can check/download the deck here.

Special thanks to NetPonto for the invitation, and to the audience for the participation and great feedback in this LONG session :).