A couple of weeks ago, I was at the first conference I’ve ever attended: Microsoft’s Future Decoded. Hosted at ExCeL in London, it was a two-day event that looked at where technology is heading and how, in some cases, the future is already here.
Although there were many different official tracks (with names such as “Empower Employees”, “Grow Culture” and “Transform Products”), the overarching theme of the conference seemed to be artificial intelligence (AI), ethics and accessibility. Curiously, although mixed reality was referenced a few times in the keynotes and talks, it didn’t seem to me like it was a major focus for the event, which was odd considering the direction Microsoft are travelling in with their work on the HoloLens (which, as a side note, I finally got to try in person!).
Microsoft was not a company that I would have immediately associated with AI, so I was curious to see what work they were doing in the area. As it turns out, the answer largely relates to Azure: Microsoft uses AI internally to power certain services (like Azure Search and Azure Cognitive Services), and also provides the infrastructure on which individuals can build their own AI solutions (such as Azure Machine Learning Service).
If I were to condense the overarching message on AI from Microsoft and the other speakers at Future Decoded, it would pretty much be “start using AI now or you’ll be left behind”. Now, this sort of sentiment isn’t new and I’m pretty sure I’ve heard it before for technologies that have since fallen out of use, but when it comes from a big company like Microsoft there’s certainly a bit more weight behind it.
One of the things that genuinely impressed me was how maturely the topic of AI was approached during the whole conference. I was honestly expecting the talks and keynotes to point to AI and claim that this technology was the be-all and end-all solution to all of your problems, that every project should start with AI and go from there. Instead the talks were very balanced about the topic, discussing not only the potential benefits but also pointing out the drawbacks. My favourite quote of the whole event that summed up this approach was written on a slide and attributed to Miguel Alvarez of AnalogFolk. It said:
When AI first comes up as a solution we start with a “no”, and then maybe it turns into a “maybe.” Only when we determine it is an impossible problem then do we really consider AI.
The thing that I really love about this quote is that it flips the “make everything AI” approach around to what essentially amounts to “make nothing AI unless we really need to”. I’ve aired my concerns about these black box systems being used for decision making before so it’s nice to see that even at a level where it would be beneficial to push AI as much as possible, the community is showing restraint.
There are many reasons that AI needs to be approached in a sensible way, but two stand out to me in particular. First of all, there are many examples of AI acting in a biased way, one of the most recent being where an Amazon AI that used historical hiring data discriminated against women. A technology that has the potential to do good but also the potential to do harm should be handled delicately and reviewed often. The other reason is one of public perception; the general public are starting to see AI discussed more and are beginning to experience it themselves in their day-to-day lives. If the perception of AI changes from it being a useful piece of technology to something evil or immoral, the chance of wider adoption is slim and it means that we’d lose out on the potential gains that AI could give us in the future.
Along with the discussion of ethics relating specifically to AI, I visited a talk hosted by Tim Difford and Richard Potter with the wonderful title of “AI’s well that ends well: What Shakespeare can teach us about the effect of bias on diversity and inclusion?”. This was my favourite talk of the whole event because it managed to get across a complex and nuanced topic (the effect of bias on diversity and inclusion) in a way that was very easy to understand. You didn’t need to know anything about Shakespeare’s plays beforehand; one speaker would give you a brief synopsis of the next play while the other speaker pulled unwitting members out of the audience to join in. After a brief scene with the audience members and some wonderfully “rustic” props, Richard and Tim would then explain what kind of bias was shown in the scene: pre-existing, technical or emergent. This was then tied back into how these biases can affect us in our everyday life and what we can do to avoid them.
I’ve been interested in accessibility for a while, so I was very excited to see the talk “A practical guide to building a more accessible workplace” by Neil Milliken and Hector Minto. This talk did a great job of discussing both why accessibility should be important to everyone as well as some practical advice for how we can all start making our workplace and the content that we produce (emails, presentations, documents etc.) accessible for everyone.
The main thing that I took away from this talk was that a small amount of effort (such as adding alt-text for images) can suddenly make your document useful to huge numbers of additional people who might have struggled to access it before.
Along with general principles, I learned a couple of things that can be applied straight away. First of all, there is a tool in Office 365 (displayed prominently on the “Review” tab in the ribbon) called “Check Accessibility” that scans your document for common issues such as missing alt-text on images, low contrast text and a host of other things that you can step through and fix one at a time. I’ve already added this tool to my workflow, making sure that I check every document I create with it.
The second thing is a tool that can detect age, racial or gender bias in your writing, which seems fantastically useful (although I’ve been having trouble setting my system up to detect racial or age bias). Compared to “Check Accessibility”, this one requires more work: it’s not just a case of going through a list of items that need fixing; you need to understand how your writing was biased and fix it yourself. I’ve tried my best to avoid non-inclusive language in the past, but it’s incredibly useful to know that there’s a tool that can help me catch instances that I might have missed.
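To make the idea concrete, here’s a toy sketch of how a wordlist-based inclusive-language check could work. The real tool is far more sophisticated than this (and the word list and function name here are purely illustrative, not how Microsoft’s tooling actually works), but it shows the basic “flag a term and suggest an alternative” pattern:

```python
import re

# Illustrative suggestions only: gendered term -> neutral alternative
SUGGESTIONS = {
    "chairman": "chairperson",
    "manpower": "workforce",
    "mankind": "humanity",
}

def check_inclusive_language(text):
    """Return (word, suggestion, position) for each flagged term."""
    findings = []
    for match in re.finditer(r"[A-Za-z]+", text):
        word = match.group().lower()
        if word in SUGGESTIONS:
            findings.append((match.group(), SUGGESTIONS[word], match.start()))
    return findings

issues = check_inclusive_language("The chairman praised the team's manpower.")
for word, suggestion, pos in issues:
    print(f"offset {pos}: consider replacing '{word}' with '{suggestion}'")
```

Even this naive version shows why the tool needs a human in the loop: it can point at a word, but only you can judge whether the sentence around it actually needs rewording.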
There was something that I really appreciated throughout the entire event and that was the very prominent live captioning for every talk and keynote. It wasn’t on a separate device, it wasn’t an app you had to download and configure, it was an inherent part of the whole experience. Although I don’t suffer from any hearing loss, I do struggle sometimes in parsing and understanding speech from people doing keynotes or talks, especially when the speech is amplified around a room. The live captioning helped me catch sections that I might have otherwise missed, which was greatly appreciated especially at a technical event where missing a few words can cause you to be lost for the rest of the talk. Combine this with live video of the speaker that could only be described as “absolutely massive” and I found it very easy to hear and see the speaker no matter where I was in the room.
I sent a tweet out to Microsoft to see if I could get some information on whether the live captioning system was auto-generated or human-driven, but as I write this I haven’t had an official response. One of the keynotes briefly mentioned that there was a person “hard at work writing [the captions]”, but I saw a few instances that seemed indicative of voice processing, such as a single word being split up into multiple words that sound similar to the original when spoken together, but don’t make sense in context. Either way, whatever the specifics were, it held up well enough throughout the conference to be helpful.
Accessibility wasn’t confined to the talks and keynotes; it was out on the expo floor as well. I’m not a wheelchair user so I can’t comment from a position of experience, but I did notice that every stand in the expo (all of which were slightly raised off the ground) had a wheelchair ramp, and that every room had space specifically designated for wheelchairs, which was a positive to see.
As good as the overall accessibility story was at Future Decoded, there were a couple of issues. For example, although the live captioning was an excellent addition, I did see some issues with caption accuracy as well as captions that raced by much too fast to read as they were trying to catch up with the speaker. On the vendor side, I saw quite a few stands that used rather tall tables or had filled most of the floor space on their stand, both of which meant reduced accessibility for wheelchair users. I don’t know that I can fault Microsoft directly for this as I’m not sure whether they issued any accessibility documentation to the vendors or not, but it’s something that I noticed that could be improved or enforced better in the future.
There was one talk that I went to by Richard ‘Tricky’ Bassett called “Expo Theatre: test and learn (with humans)”. This talk doesn’t fit nicely into any of the other sections in the rest of this post, but I thought that it was worth mentioning. It discussed using humans to test things that didn’t exist yet; as an example, Richard told the story of a bank that wanted to see how well people would interact with a chatbot on their service. Rather than going through the effort of developing a chatbot first and then testing it afterwards, they set up a “Wizard of Oz” situation where a human sat in the next room on a laptop, responding in place of the chatbot. This helped the bank generate useful data without the upfront costs, as well as helping influence how the chatbot would be built to better target their needs! The idea itself is simple, but it’s a very clever one and I’ll definitely be looking to use it in the future.
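The technique is simple enough to sketch in a few lines of code. In this minimal, hypothetical version (all names are mine, not from the talk), the user interacts with what looks like a chatbot, while replies actually come from a hidden operator; here the operator is a stand-in callable, where in a real study it would be a person typing in another room:

```python
class WizardOfOzBot:
    """Presents a chatbot interface; replies come from a hidden human operator."""

    def __init__(self, operator):
        self._operator = operator  # callable standing in for the human
        self.transcript = []       # conversation log for later analysis

    def ask(self, message):
        """Looks like a bot answering, but relays to the operator."""
        self.transcript.append(("user", message))
        reply = self._operator(message)
        self.transcript.append(("bot", reply))
        return reply

# Stand-in for the human in the next room
def human_operator(message):
    return "Sure, I can help you open a savings account."

bot = WizardOfOzBot(human_operator)
print(bot.ask("Can I open a savings account?"))
```

The transcript is the valuable part: it captures how real users phrase their requests, which can then shape how the eventual chatbot is actually built.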
I would be completely remiss if I didn’t mention that the people at Future Decoded were wonderful. Everyone working the booths in the expo hall was very friendly, the speakers were all very accommodating when I wanted to chat with them after their talks and, most importantly, I met a number of other attendees who really made the event what it was. I’ve heard advice for conferences that said to talk to as many people as you can and, after my experiences at Future Decoded, I completely agree.
One of the only issues that I had with the whole event was how little time there was between the various talks and keynotes. There was a small period of time in the mornings and at the end of the first day where you could visit the expo, but if you had a full schedule (which I did, as I booked in a talk for every slot) you often had only between 15 and 30 minutes to get from one talk to the next, as well as to get a drink, go to the bathroom and so on. For the more popular talks, you also had to contend with a couple of hundred people making the same journey.
Since this was my first conference it’s very possible that I might have oversubscribed to the talks or that I’m just not used to how fast-paced they are, but I’ll have to see what it’s like when I go to my next one.
All in all, Future Decoded was a fantastic event and I greatly enjoyed it. If you’re looking for an event that is super technical and discusses implementation, this might not be for you as it is decidedly higher level. If, however, you’re looking to learn more about technology trends and to check out some cool gadgets, I don’t think you’ll be disappointed.
2 thoughts on “Future Decoded 2018”
TL:DR version? 😉 just kidding, great stuff Matt
Superb mate. Really great article. Glad you enjoyed the show.