Monkigras 2014: Sharing craft

After Monkigras 2013, I was really looking forward to Monkigras 2014. The great talks about developer culture and creating usable software, the amazing buzz and friendliness of the event, the wonderful lack of choice over which talks to go to (there’s just one track!!), and (of course) the catering:

coffee
cheese

The talks at Monkigras 2014

The talks were pretty much all great so I’m just going to mention the ones that were particularly relevant to me.

Rafe Colburn from Etsy talked about how to motivate developers to fix bugs (IBMers, read ‘defects’) when there’s a big backlog of bugs to fix. They’d tried many strategies, including bug rotation, but none worked. The answer, they found, was to ask their support team to help prioritise the bugs based on the problems that users actually cared about. That way, the developers fixing the bugs weren’t overwhelmed by the sheer numbers to choose from. Also, when they’d done a fix, the developers could feel that they’d made a difference to the user experience of the software.

Rafe Colburn from Etsy

While I’m not responsible for motivating developers to fix bugs, my job does involve persuading developers to write articles or sample code for WASdev.net. So I figure I could learn a few tricks.

A couple of talks that were directly applicable to me were Steve Pousty’s talk on how to be a developer evangelist and Dawn Foster’s on taking lessons on community from science fiction. The latter was a quick look through various science fiction themes and novels applied to developer communities, which was a neat idea though I wished I’d read more of the novels she cited. I was particularly interested in Steve’s talk because I’d seen him speak last year about how his PhD in Ecology had helped him understand communities as ecosystems in which there are sometimes surprising dependencies. This year, he ran through a checklist of attributes to look for when hiring a developer evangelist. Although I’m not strictly a developer evangelist, there’s enough overlap with my role to make me pay attention and check myself against each one.

Dawn Foster from PuppetLabs

One of the risks of TED Talk-style talks is that if you don’t quite match up to the ‘right answers’ espoused by the speakers, you could come away from the event feeling inadequate. The friendly atmosphere of Monkigras, and the fact that some speakers directly contradicted each other, meant that this was unlikely to happen.

It was still refreshing, however, to listen to Theo Schlossnagle basically telling people to do what they find works in their context. Companies are different and different things work for different companies. Similarly, developers are people and people learn in different ways so developers learn in different ways. He focused on how to tell stories about your own failures to help people learn and to save them from having to make the same mistakes.

Again, this was refreshing to hear because speakers often tell you how you should do something and how it worked for them. They skim over the things that went wrong and end up convincing you that if only you immediately start doing things their way, you’ll have instant success. Otherwise that feeling of inadequacy just kicks in, like when you read certain people’s Facebook statuses. Theo’s point was that it’s far more useful from a learning perspective to hear about the things that went wrong for them. Not in a morbid, defeatist way (that way lies only self-pity and fear) but as a story in which things go wrong but are righted by the end. I liked that.

Theo Schlossnagle from Circonus

Ana Nelson (geek conference buddy and friend) also talked about storytelling. Her point was more about telling the right story well so that people believe it rather than believing lies, which are often much more intuitive and fun to believe. She impressively wove together an argument built on various fields of research including Psychology, Philosophy, and Statistics. In a nutshell, the kind of simplistic headlines newspapers often publish are much more intuitive and attractive because they fit in with our existing beliefs more easily than the usually more complicated story behind the headlines.

Ana Nelson from Brick Alloy

The Gentle Author spoke just before lunch about his daily blog in which he documents stories from local people. I was lucky enough to win one of his signed books, which is beautiful and engrossing. Here it is with my swagbag.

After his popular talk last year, Phil Gilbert of IBM returned to give an update on how things are going with Design@IBM. Theo’s point about the importance of a company’s context is especially relevant when you’re trying to change the culture of such a large company. Phil introduced a new card game that you can use to help teach people what it’s like to be a designer working within the constraints of a real software project. I heard a fair amount of interest from non-IBMers who were keen for a copy of the cards to be made available outside IBM.

Phil Gilbert’s Wild Ducks card game

On the UX theme, I loved Leisa Reichelt’s talk about introducing user research to the development teams at GDS. While all areas of UX can struggle to get taken seriously, user research (eg interviewing participants and usability testing) is often overlooked because it doesn’t produce visual designs or code. Leisa’s talk was wonderfully practical in how she related her experiences at GDS of proving the worth of user research, to the extent that the number of user researchers there has greatly increased.

And lastly I must mention Project Andiamo, which was born at Monkigras 2013 after watching a talk about laser scanning and 3D printing old railway trains. The project aims to produce medical orthotics, like splints and braces, by laser scanning the patient’s body and then 3D printing the part. This not only makes the whole process much quicker and more comfortable for the patient, it also costs a fraction of what orthotics currently cost to make.

Samiya Parvez & Naveed Parvez of Project Andiamo
Samiya Parvez & Naveed Parvez of Project Andiamo

If you can help in any way, take a look at their website and get in touch with them. Samiya and Naveed’s talk was an amazing example of how a well-constructed story can get a powerful message across to its listeners.

After Monkigras 2014, I’m now really looking forward to Monkigras 2015.


Node-RED Flows that make flows

Executive Summary

When a mummy function node and a daddy function node love each other very much, the daddy node injects a payload into the mummy node through a wire between them, and a new baby message flow is made, which comes out of mummy’s output terminal. The baby’s name is Jason.

Where did this idea come from?

In Node-RED, if you export some nodes to the clipboard, you will see the flow and its connections use a JSON wire format.
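For illustration, here’s a rough sketch of that wire format: a hypothetical two-node flow with made-up IDs and coordinates (real exports carry more properties, which vary by node type and Node-RED version):

[
  { "id": "a1", "type": "inject", "name": "tick", "x": 100, "y": 80, "wires": [["a2"]] },
  { "id": "a2", "type": "debug", "name": "", "x": 300, "y": 80, "wires": [] }
]

Each element describes one node, and the "wires" array lists the IDs of the nodes its output connects to, which is exactly what makes the format amenable to being generated by a program.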

For a few weeks I’ve been thinking about generating that JSON programmatically, in order to create flows which encode various input and output capabilities and functions, according to some input parameters which give the specific details of what the flow is to do.

I think there’s been an alignment of planets, which led to this idea:

  • @stef_k_uk has been talking to me about deploying applications onto “slave” Raspberry Pis from a master Pi.
  • Dritan Kaleshi and Terence Song from Bristol University have been discussing using the Node-RED GUI as a configurator tool for a system we’re working on together, and we’ve also been working on representing HyperCat catalogues in Node-RED, as part of our Technology Strategy Board Internet of Things project, IoT-Bay.
  • @rocketengines and I were cooking up cool ideas for cross-compiling Node-RED flows into other languages a while back.
  • And, in a very productive ideas afternoon last week, @profechem and I stumbled upon an idea for carrying the parser for a data stream along with the data itself, as part of its meta-data description.

All of the above led me to think “wouldn’t it be cool if you could write a flow in Node-RED which produced as output another flow.” And for this week’s #ThinkFriday afternoon, I gave it a try.

First experiment

The first experiment was a flow which had an injector node feeding a JSON object of parameters into a flow which generated the JSON for an MQTT input node, a function node to process the incoming data in some way, and an MQTT output node. So it was the classic subscribe – process – publish pattern which occurs so often in our everyday lives (if you live the kind of life that I do!).
the first Node-RED flow

And here’s the Node-RED flow which generates that:

the flow-generator

So if you sent in:

{
  "broker": "localhost",
  "input": "source_topic",
  "output": "destination_topic",
  "process": "msg.payload = \"* \"+msg.payload+\" *\";\nreturn msg;"
}

the resulting JSON would be the wire format for a three-node flow which has an MQTT input node subscribed to “source_topic” on the broker on “localhost”, a function node which applies a transformation to the data (in this case, wrapping it with an asterisk at each end), and finally an MQTT publish node sending it to “destination_topic” on “localhost”.
N.B. make sure you escape any double quotes in the “process” string, as shown above.
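For illustration, here’s a minimal sketch of the kind of thing the generator’s function node might do with those parameters. The node properties here are stand-ins (for instance, Node-RED’s real MQTT nodes refer to a separate broker configuration node rather than a hostname string), so treat this as the shape of the idea rather than a faithful copy of my flow:

// Node-RED function node body: build the wire-format JSON for an
// mqtt-in -> function -> mqtt-out flow from the parameters in msg.payload.
var p = msg.payload;
var flow = [
    { id: "1", type: "mqtt in",  broker: p.broker, topic: p.input,
      x: 100, y: 100, wires: [["2"]] },
    { id: "2", type: "function", name: "process", func: p.process,
      x: 300, y: 100, wires: [["3"]] },
    { id: "3", type: "mqtt out", broker: p.broker, topic: p.output,
      x: 500, y: 100, wires: [] }
];
msg.payload = JSON.stringify(flow);
return msg;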

The JSON appears in the debug window. If you highlight it, right-click – Copy, then do Import from… Clipboard in Node-RED, Ctrl-V the JSON into the text box, and click OK, you get the three-node flow described above, ready to park on your Node-RED worksheet and then Deploy.
the resulting flow

And it works!!

So what?

So far so cool. But what can we do with it?

The next insight was that the configuration message (supplied by the injector) could come from somewhere else. An MQTT topic, for example. So now we have the ability for a data stream to be accompanied not only by meta-data describing what it is, but also by the code which parses it.
flow with added MQTT configurator

My thinking is that if you subscribe to a data topic, say:
andy/house/kitchen/temperature
there could be two additional topics, published “retained” so you get them when you first subscribe, and then any updates thereafter:

A metadata topic which describes, in human and/or machine readable form, what the data is about, for example:
andy/house/kitchen/temperature/meta with content
“temperature in degrees Celsius in the kitchen at Andy’s house”

And a parser topic which contains the code which enables the data to be parsed:
andy/house/kitchen/temperature/parser with content
msg.value = Math.round(msg.payload) + \" C\"; return msg;
(that’s probably a rubbish example of a useful parser, but it’s just for illustration!)
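To make the idea concrete, here’s a speculative node.js sketch of a subscriber that picks up the retained parser and applies it to subsequent data messages. It assumes the mqtt npm package and a broker on localhost; compiling code received over the network like this is, of course, a matter of trust:

var mqtt = require("mqtt");

var client = mqtt.connect("mqtt://localhost");
var parse = function (msg) { return msg; };  // identity until a parser arrives

client.subscribe("andy/house/kitchen/temperature");
client.subscribe("andy/house/kitchen/temperature/parser");

client.on("message", function (topic, payload) {
    if (/\/parser$/.test(topic)) {
        // Compile the retained parser code into a function of msg
        parse = new Function("msg", payload.toString());
    } else {
        var result = parse({ payload: payload.toString() });
        console.log(result.value);  // e.g. "22 C"
    }
});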

If you’re storing your data in a HyperCat metadata catalogue (and you should think about doing so if you’re not – see @pilgrimbeart’s excellent HyperCat in 15 minutes presentation), then the catalogue entry for the data point could include the URI of the parser function along with the other meta-data.

And then…

Now things get really interesting… what if we could deploy that flow we’ve created to a node.js run-time and set it running, as if we’d created the flow by hand in the Node-RED GUI and clicked “Deploy”?
Well we can!

When you click Deploy, the Node-RED GUI does an HTTP POST to “/flows” in the node.js run-time that’s running the Node-RED server (red.js), and sends it the list of JSON objects which describe the flow that you’ve made.
So… if we hang an HTTP request node off the end of the flow which generates the JSON for our little flow, then it should look like a Deploy from a GUI.

Et voilà!
flow that deploys to a remote Node-RED
Note that you have to be careful not to nuke your flow-generating-flow by posting to your own Node-RED run-time! I am posting the JSON to Node-RED on my nearby Raspberry Pi. When you publish a configuration message to the configuration topic of this flow, the appropriate set of nodes is created – input – process – output – and then deployed to Node-RED on the Pi, which dutifully starts running the flow, subscribing to the specified topic, transforming the data according to the prescribed processing function, and publishing it to the specified output topic.
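In my flow this last step is just an HTTP request node, but the equivalent in plain node.js is only a few lines. A sketch, assuming the Pi’s Node-RED is listening on its default port 1880 with no authentication (the hostname is made up):

var http = require("http");

// POST the generated flow JSON to a remote Node-RED, mimicking
// what the GUI does when you click Deploy.
function deploy(flowJson) {
    var req = http.request({
        host: "raspberrypi.local",  // the nearby Pi, NOT our own run-time!
        port: 1880,
        path: "/flows",
        method: "POST",
        headers: { "Content-Type": "application/json" }
    }, function (res) {
        console.log("Deploy returned HTTP " + res.statusCode);
    });
    req.end(flowJson);
}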

I have to say, I think this is all rather exciting!

@andysc

Footnote:

It’s worth mentioning that Node-RED generates unique IDs for nodes that look like “8cf45583.109bf8”. I’m not sure how it does that, so I went for a simple monotonically increasing number instead (1, 2 …). It seems to work fine, but there might be good reasons (which I’m sure @knolleary will tell me about) why I should do it the “proper” way.
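For what it’s worth, both schemes are only a couple of lines each. The second is just a guess at something Node-RED-ish, not how Node-RED actually does it:

// Simple monotonically increasing IDs, as used in my flow generator
var counter = 0;
function nextId() {
    return String(++counter);  // "1", "2", ...
}

// A random two-part hex ID that merely *looks* like "8cf45583.109bf8"
function randomishId() {
    return Math.random().toString(16).substr(2, 8) + "." +
           Math.random().toString(16).substr(2, 6);
}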

At ThingMonk 2013

I attended ThingMonk 2013 conference partly because IBM’s doing a load of work around the Internet of Things (IoT). I figured it would be useful to find out what’s happening in the world of IoT at the moment. Also, I knew that, as a *Monk production, the food would be amazing.

What is the Internet of Things?

If you’re reading this, you’re familiar with using devices to access information, communicate, buy things, and so on over the Internet. The Internet of Things, at a superficial level, is just taking the humans out of the process. So, for example, if your washing machine were connected to the Internet, it could automatically book a service engineer if it detects a fault.

I say ‘at a superficial level’ because there are obviously still issues relevant to humans in an automated process. It matters that the automatically-scheduled appointment is convenient for the householder. And it matters that the householder trusts that the machine really is faulty when it says it is and that it’s not the manufacturer just calling out a service engineer to make money.

This is how James Governor of RedMonk, who conceived and hosted ThingMonk 2013, explains IoT.

What is ThingMonk 2013?

ThingMonk 2013 was a fun two-day conference in London. On Monday was a hackday with spontaneous lightning talks and on Tuesday were the scheduled talks and the evening party. I wasn’t able to attend Monday’s hackday so you’ll have to read someone else’s write-up about that (you could try Josie Messa’s, for instance).

The talks

I bought my Arduino getting started kit (which I used for my Christmas lights energy project in 2010) from Tinker London so I was pleased to finally meet Tinker’s former CEO, Alexandra Deschamps-Sonsino, at ThingMonk 2013. I’ve known her on Twitter for about 4 years but we’d never met in person. Alex is also founder of the Good Night Lamp, which I blogged about earlier this year. She talked at ThingMonk about “the past, present and future of the Internet of Things” from her position of being part of it.

Alexandra Deschamps-Sonsino, @iotwatch

I think it was probably Nick O’Leary who first introduced me to the Arduino, many moons ago over cups of tea at work. He spoke at ThingMonk about wiring the Internet of Things. This included a demo of his latest project, Node-RED, which he and IBM have recently open-sourced on GitHub.

Nick O’Leary talks about wiring the Internet of Things

Sadly I missed the previous day when it seems Nick and colleagues, Dave C-J and Andy S-C, won over many of the hackday attendees to the view that IBM’s MQTT and Node-RED are the coolest things known to developerkind right now. So many people mentioned one or both of them throughout the day. One developer told me he didn’t know why he’d not tried MQTT 4 years ago. He also seemed interested in playing with Node-RED, just as soon as the shock that IBM produces cool things for developers had worn off.

Ian Skerrett from Eclipse talked about the role of Open Source in the Internet of Things. Eclipse has recently started the Paho project, which focuses on open source implementations of the standards and protocols used in IoT. The project includes IBM’s Really Small Message Broker and Roger Light’s Mosquitto.

Ian Skerrett from Eclipse

Andy Piper talked about the role of signals in the IoT.


There were a couple of talks about people’s experiences of startups producing physical objects compared with producing software. Tom Taylor talked about setting up Newspaper Club, a site where you can put together your own newspaper and get a run of it printed. His presentation included this slide:

Best. Slide. Ever.

Matt Webb talked about producing Little Printer, which is an internet-connected device that subscribes to various sources and prints them for you on a strip of paper like a shop receipt.

Matt Webb

Patrick Bergel made the very good point in his talk that a lot of IoT projects, at the moment, are aimed at ‘non-problems’. While fun and useful for learning what we can do with IoT technologies, they don’t really address the needs of real people (ie people who aren’t “hackers, hipsters, or weirdos”). For instance, there are increasing numbers of older people who could benefit from things that address problems like social isolation, dementia, blindness, and physical and cognitive impairments. His point was underscored throughout the day by examples of fun-but-not-entirely-useful-as-is projects, such as flying a drone with fruit. That’s not to say such projects are a waste of time in themselves but that we should get moving on projects that address real problems too.

Patrick Bergel, @goodmachine, on Thingdom Come

The talk which chimed the most with me, though, was Claire Rowland’s on the important user experience (UX) issues around IoT. She spoke about the importance of understanding how users (householders) make sense of automated things in their homes.


The book

I bought a copy of Adrian McEwan’s Designing The Internet of Things book from Alex’s pop-up shop, (Works)shop. Adrian’s a regular at OggCamp and kindly agreed to sign my copy of his book for me.

Adrian McEwan and the glamorous life of literary renown.

The food

The food was, as expected, amazing. I’ve never had bacon and scrambled egg butties that melt in the mouth before. The steak and Guinness casserole for lunch was beyond words. The evening party was sustained with sushi and tasty curry.

Mr Monk himself, @monkchips (or James Governor, as his parents named him).

Thanks, James!


The Ambient Kettle

Back in 2007, my Mum and I got a pair of Internet-connected Nabaztag bunnies. Aside from all the online content we could subscribe to using the bunnies, the most fun thing for me was that we could ‘pair’ our bunnies so that they would talk to each other. If I moved the ears on my bunny, the ears on my Mum’s bunny would move to match, and vice versa. The 250 physical miles disappear for a few seconds when you see the ears move and know that it’s because Mum is physically moving the ears of her bunny. I know exactly what she’s doing at that particular point in time, as if we’re briefly in the same room. The technical term for this is, apparently, ambient awareness.

My Nabaztag bunny

The bunny ears experience of ambient awareness inspired my first (and, so far, only) Arduino project: Monitoring electricity using Christmas lights. The red/orange lights indicated the current electricity usage of my house and the blue/green lights indicated the current electricity usage of Mum and Dad’s house. The more electricity currently being used, the faster the lights flashed. Again, it was just that tiny tiny insight into what was happening 250 miles away. Just the mundanity of everyday life shared.

So I was curious about the Kickstarter project for the Good Night Lamp. The Good Night Lamp is a really nice and simple concept. One person has a Big Lamp (shaped like a house) and they give Little Lamps, associated with the Big Lamp, to friends and/or family anywhere in the world. When the owner switches off the Big Lamp (when they go out or go to bed), the associated Little Lamps also switch off. An appealing part of it is that you can collect a Little Lamp from each of your family or group of friends and arrange them on a shelf so that before you go to bed at night, you can see each of them ‘say goodnight’ as their respective lights go out.

Good Night Lamp

The problem I see with the Good Night Lamp is similar to the one with the Nabaztag. While I think it’s great having simple devices that do just one thing well, it doesn’t half clutter up the place. These kinds of devices need shelf-space. And it has to be shelf-space you can see easily in a place you’ll often be or they don’t work. Maybe as people replace all their books with the more easily stored ebooks, living-room bookcases will become filled with ambient devices instead. I got to chatting with Ambient Orb fan Andy Stanford-Clark about it.

While my Mum’s and my Nabaztags have now died or gone into hibernation and the Christmas lights never made it as far as the tree, our more lasting providers of ambient awareness don’t even have their own physical forms. Instead, they’re software on our smartphones and tablets, devices that we have around anyway, wherever we are. In particular, SMS updates of my Mum and Dad’s Tweets.

Every morning, my Mum wakes up, has a coffee with my Dad, and reads interesting articles on her iPad. I know this from when I’ve visited them and because when she reads an interesting article, she tweets or retweets it and I receive about half-a-dozen txts in quick succession. Later in the afternoon, after they’ve got home from wherever they’ve been that day (or have found free wifi somewhere while they’re out) and are drinking another cup of coffee or tea, I receive another half-a-dozen txts pointing to interesting articles online. Just receiving the txts gives me an awareness of them waking up or sitting down to read the paper. Clicking the links to the articles gives me an insight into what they’re reading and how they’re probably feeling about the topics of the articles. The fairly mundane, everyday things that we wouldn’t remember, or bother, to talk about on the phone a week or so later.

As drinking coffee or tea seems to play a regular, if supporting, part in the activities I’m notified about, Andy and I came up with the idea of the Ambient Kettle. In my house, we have a whole-house Current Cost monitor that is connected to a server out on the Internet. It was the feed from this server that we used in my Christmas Lights project. Since then, though, I’ve added individual appliance monitors (IAMs) to a few of the appliances around the house, including the kettle. The feeds from these IAMs also go to the server and so can be used by applications that know which data to request.

So Andy hacked up a (private) Twitter account, @ambientkettle, which my Mum follows. Each time the kettle boils in my house, the @ambientkettle account tweets to my Mum:

@ambientkettle tweets

Without being physically present or explicitly letting her know that I am making a cup of tea, she can get a sense of what I’m doing. The messages in the tweets that @ambientkettle sends are pre-canned and chosen at random but made to be chatty enough that it seems a bit like the start of a conversation. Indeed, Mum sometimes tweets back to it to say that she and Dad are also having a cup of tea or are looking forward to one when they get home, or whatever. As I say, it’s mundane but it’s those kinds of mundane things that make everyday life.
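Andy’s implementation is his own (and private), but a speculative sketch of the general shape might look something like this, assuming the IAM readings arrive as watts on an MQTT topic and using the twitter npm package (the topic name and power threshold are invented):

var mqtt = require("mqtt");
var Twitter = require("twitter");

var messages = [
    "Kettle's just boiled. Time for a cuppa?",
    "Tea's brewing at our house!",
    "Another round of tea on the go."
];

var client = mqtt.connect("mqtt://localhost");
var twitter = new Twitter({ /* credentials */ });
var boiling = false;

client.subscribe("house/power/kettle");
client.on("message", function (topic, payload) {
    var watts = parseInt(payload.toString(), 10);
    if (watts > 2000) {
        boiling = true;   // the kettle element is on
    } else if (boiling) {
        boiling = false;  // power has dropped: the boil has finished
        var text = messages[Math.floor(Math.random() * messages.length)];
        twitter.post("statuses/update", { status: text }, function (err) {
            if (err) { console.error(err); }
        });
    }
});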

I’ll be interested to see how the Good Night Lamp gets taken up. It was featured in the very mainstream Daily Mail yesterday and its founding team has a good record of startups, product design, interaction design, and Internet of Things creativeness. And there’s something very appealing about having ambient awareness of friends and family when we’re geographically spread apart.


Monkigras 2013: Scaling craft

Monkigras goody bag

The work of William Morris, my GCSE history teacher said, was a bit of a moral dilemma. Morris was a British designer born during the Industrial Revolution. British (and then world) industry was moving rapidly towards mass production by replacing traditional, cottage-industry production processes with the more efficient, and therefore profitable, machines. One thing that suffered under this move to mass production was decoration and quality, which lost out to the focus on function and quantity. Morris reacted against this by designing and producing decorations like wallpaper and textiles using the traditional craft techniques of skilled craftspeople. My history teacher’s point was that although Morris, a passionate socialist, was able to create high quality goods by using smaller-scale production methods, only wealthy people could afford to buy his designs, which was hardly equality in action. On the other hand, the skills of craftspeople were being retained, quality goods were being produced, and the craftspeople were getting paid for the quality of their work.

My pretty, handcrafted latte

Monkigras 2013, in London last week, took on this theme of ‘scaling craft’ in the context of beer, coffee, and software. All parts of this trinity of software development can benefit hugely from a focus on quality over quantity. Before I went to Monkigras, I wasn’t really sure what to expect from a tech event advertised as having a lot of beer. It did have a lot of beer (and coffee) available but if you didn’t want it you could avoid it (several people I talked to said they didn’t usually drink beer). And no one seemed to get ridiculously drunk. And there were a lot of very cool talks.

The beer was also a fun analogy to apply to software development. Despite pubs in the UK closing hand over fist at the moment, microbreweries are on the rise. Microbrewing is about producing beer in small quantities on a commercial basis, so that quality can be maintained while the business stays viable. One of the things we learnt from a brewer at Monkigras is that the taste of water varies according to where it comes from. Water is a major component of beer so if the taste of your water supply changes, the taste of your beer changes. To maintain the quality of the beer you brew, you must work within the natural resources available to you and not over-expand. Similarly, quality comes from skilled and knowledgeable people who need to be paid for their skill. If you take on cheaper staff and train them less so that you can make more profit, you will end up with a poorer quality product. You get the idea.

Handcrafting a wooden spoon.

This principle applies to all areas of craft. Whether it’s producing quality coffee, a quality wooden spoon, or quality conference food, or organising a quality conference, you have to focus on quality and ensure that, if you scale what you do to make it more readily available to more people, you don’t sacrifice quality along the way. And, importantly, you have to know when to stop. Bigger doesn’t necessarily mean better.

Software is misleadingly easy to produce. Unlike making physical objects, producing software requires little in the way of materials or machinery: you can make copies and distribute them to customers over the Internet at very little cost. Initially, at least, it’s all in the skill of the craftspeople and their ability to identify their target users and market. If they can’t make what people will buy, they will go out of business very quickly. As software development companies get larger, the people who make the software become further removed from the selling of it to their customers, so they become more focused on what they are close to, the technology, and not on who will use it.

Phil Gilbert on IBM Design Thinking

Phil Gilbert, IBM’s new General Manager of Design, comes from a 30-year career in startups, most recently Lombardi, where design was core to the culture. IBM has a portfolio of 3000 software products, so when Lombardi was acquired by IBM, Phil set about simplifying the IBM Business Process Management portfolio, reducing 21 different products to just four and kicking off a cultural change to bring design and thinking about users to the centre of product development. Whilst praising IBM’s history of design and a recent server product design award, he also acknowledged at Monkigras: “We are rethinking everything at IBM. Our portfolio is a mess today and we need to get better”. Changing a culture like IBM’s isn’t easy but I’ve seen and experienced a big difference already. Phil’s challenge is to scale the high-quality, user-focused design values of a startup to a century-old global corporation.

One of the things that struck me most at Monkigras, and appealed to me most as a social scientist, was the focus on the human side. Despite it being a developer conference, I remember seeing only one slide that contained code. The overriding theme was about people and culture, not technology; how to maintain quality by maintaining a culture that respects its craftspeople and how to retain both even if the organisation gets bigger, even if that naturally limits how much the organisation can grow. Personal analogy was also a big thing…

Laser-scanned model of the engine

Cyndi Mitchell from Logspace talked about her family’s hog farm and working within the available resources. Shanley Kane from Basho used Dante’s spheres to describe product management best practices. Steve Citron-Pousty from RedHat used his background as an ecologist to manage communities and ‘developer ecosystems’ (don’t just call it an ecosystem; treat it like one). Diane Mueller from ActiveState talked about her 20%-time project to build a crowdsourced database of totem poles and the challenges of understanding what gets people to want to contribute to such projects. Elco Jacobs talked about his BrewPi project, automatically managing the temperature of his homebrewing fridge using a Raspberry Pi-based controller, and how he has open-sourced it to build a community and kick-start it as a potential small business. Rafe Colburn from Etsy made the link between craft and software engineering more directly in his slides.

3D printer making a spoon

I don’t know much about William Morris so I don’t know which presentations he would have enjoyed or disagreed with. Morris was a preservationist and started the Society for the Protection of Ancient Buildings to ensure that old buildings get repaired and not restored to an arbitrary point in the past. So maybe he would have found laser-scanning and 3D printing interesting. Chris Thorpe is a model train geek and likes to hand-make his own models of real-life objects. He too is interested in alternatives to mass manufacturing and has started to look at how to make model kits. He uses a laser to scan the objects and a 3D printer to prototype the models. He can then send the model to a commercial company who can make it into kits for him to sell. He has recently used his laser-scanning technique to scan a rediscovered old Welsh railway engine to preserve it, virtually at least, in the state in which it was found.

I had a great time with lots of cool and fun people. Well done to @monkchips for scaling a conference to just the right level of intimacy and buzz. The last thing I saw before I left was the craftsman making a wooden spoon pitted in competition against the 3D printer making a plastic spoon.

You can find many of the slide presentations, and more about the conference, on Lanyrd.


Emerging Technology Services Interviews

The British Computer Society recently came to Hursley to interview some of the members of Emerging Technology Services about some of the work we’ve been doing recently. The results, as ever in ETS, are really interesting so here is the set of video interviews reposted for all you Eightbar subscribers out there.

To kick things off we have Bharat Bedi, IBM Master Inventor, talking about his work on the Universal Information Framework, an innovative idea that enables secure interactions and could benefit banks, for example.

Another piece from Bharat Bedi, this time talking about his work on the Living Safe project, which runs in Bolzano, Italy, to help older residents who live by themselves.

Now something a little different from Kevin Brown, IBM Senior Inventor, talking about his work using a mind-reading headset. Here he gets Brian Runciman from the BCS to drive a car with his brain and trains him to use a brain-wave-reading headset.

Next up we have Dominic Harries, IBM Emerging Technologies Specialist, talking about some of his work using a multi-user, multi-touch surface. Here Dominic demonstrates the use of a business application on the multi-touch table.

Last, but not least, we have Helen Bowyer, Emerging Technologies Manager, talking about her work on automatic sign-language translation. Helen explains and demonstrates the Say It, Sign It (SiSi) project, which uses an avatar to translate spoken English into sign language.

The original content can be found at http://www.bcs.org/content/conWebDoc/44430.

Why Doctor Who Confidential mattered

Behind-the-scenes documentaries, like Doctor Who Confidential, matter. They matter because they show viewers, in particular children still deciding what to do with their lives, that it takes more to produce a high-class TV programme than just a few actors who become famous. They show what other creative and/or technical jobs there are in television.

A couple of weekends ago, we went to the Doctor Who Official Convention (#dwcuk) in Cardiff. While one of the three main panels featured the three stars, Matt Smith, Karen Gillan and Arthur Darvill (along with executive producers Steven Moffat and Caroline Skinner), most of the other scheduled events were focused on how Doctor Who is made.

Danny Hargreaves makes it snow indoors

At the very start of the day, we went to see Danny Hargreaves blow things up and talk about the special effects on Doctor Who. In his Q&amp;A session (after making it snow indoors), the first question asked was “How did you get into special effects work?” and, between questions like how he blew up the Torchwood Hub and how he makes the Doctor’s hands and head fiery during a regeneration, a later question was “When did you realise you wanted to work in special effects?”. Attendees were interested not just in the fictional stories and characters but in how the programme is made and the interesting careers they might not otherwise have come across.

Old hard drive on the TARDIS console to make the spinny thing spin.

Throughout the day, I heard audience members ask how to become costume and prosthetics designers and how to become script writers. Danny described how his team designs and creates the effects, how they assess the risks of blowing things up, and who they work with to make it all happen. He also explained how he came to be a trainee in the nascent world of special effects before studying Mechanical Engineering so that he could build the devices they need for Doctor Who (and the other shows he’s worked on, like Coronation Street). Directors of photography, set designers, executive producers, writers, and directors went on to talk about what their own jobs entailed day-to-day and how it all comes together to make an episode of Doctor Who.

These discussions continued the story that used to be told after each new episode of Doctor Who by Doctor Who Confidential on BBC3. Doctor Who Confidential started in 2005 with the return of Doctor Who. As well as giving interesting perspectives on the making of that night’s episode, it featured interviews with, and ‘day-in-the-life’ documentaries about, the actors (including showing the less glamorous side of shivering in tents and quilted coats between takes), the casting directors, the producers, the writers, the choreographers, the costume designers, the special effects supervisors, the monster designers, the prosthetics experts, the directors, the assistant directors, and many, many others. It also held competitions for children to write a mini episode and then see the process of making it, which would’ve been an amazing experience!

Yes, it took a slightly odd turn in the last series when it turned a bit Top Gear by showing Karen Gillan having a driving lesson and Arthur Darvill swimming with sharks; possibly a misguided attempt to increase its popularity before it got canned anyway to cut costs.

I think it’s a real shame to lose Doctor Who Confidential and its insights into the skill, hard work, and opportunities in TV and film production.


Cool photo of Danny in the snow by Tony Whitmore.


Reflecting on our total home energy usage

The graph of our total gas usage per year doesn’t decrease quite so impressively as our electricity graph, which I blogged about halving over five years. Because the numbers were getting ridiculously big and difficult to compare at a glance, I’ve re-created the electricity graph here in terms of our average daily electricity usage instead of our annual usage (click the graph to see a larger version):

Graph of daily electricity usage per year.


If you compare it with the average daily gas usage graph below, you can see (just from the scales of the y-axes) that we use much more gas than electricity (except in 2007, which was an anomalous year because we didn’t have a gas fire during the winter, so we used an electric halogen heater instead):


Graph of daily gas usage per year.

Our gas usage has come down overall since 2005 (from 11280 kWh in 2005 to 8660 kWh in 2011; or 31 kWh per day to 24 kWh per day on average) but not so dramatically as our electricity usage has. Between 2005 and 2011, we reduced our electricity usage by about a half and our gas usage by about a quarter.

Gas, in our house, is used only for heating rooms and water. So if I were to chart the average outside temperatures of each year, they’d probably track reasonably closely to our gas usage. In 2005 (when we used an average of 31 kWh per day), we still had our old back boiler (with a lovely 1970s gas fire attached) which our central heating installer reckoned was about 50% efficient. In 2006 (26 kWh per day), we replaced it with a new condensing boiler (apparently 95% efficient) but didn’t replace the gas fire until mid-2007 (the dodgy year that doesn’t really count). In 2006, we also had the living-room (our most heated room) extended so it had a much better insulated outside wall, door, and window. These changes could explain the pattern of reducing gas usage year by year up till then.

Old boiler being removed

In 2009, January saw sub-zero temperatures and it snowed in November and December. I think that must be the reason why our usage for the whole year shot back up again, despite the new boiler, to 31 kWh per day. In 2010 (21 kWh per day), it was again very cold and snowy in January; I think the slight dip in gas usage that year compared with both 2008 (25 kWh per day) and 2011 (24 kWh per day) was down to a problem with the gas fire that meant we used the electric halogen heater again during the coldest month. In 2011 it snowed in January but was fairly mild for the rest of the year.

I think 2008, 2010, and 2011 probably represent ‘typical’ years of heating our house with its new boiler and gas fire. Like I concluded about reducing our electricity usage, I think our gas usage went down mostly because of better insulation and a more efficient boiler. We did also reduce the default temperature of our heating thermostat to about 17 degrees C (instead of 20 degrees C) a couple of years ago (we increase it when we need to but it stays low if we don’t), which I think has made some difference, though it’s hard to tell when our heating usage is so closely tied to the outside temperature. Also, we don’t currently have any way of separating out our water heating from our central heating, or our gas fire from the boiler.

Of course, what really matters overall is the total amount of energy we use (that is, the gas and electricity numbers combined). So I’ve made a graph of that too. Now we’re talking numbers like 48 kWh per day in 2005 to 33 kWh per day in 2011.


Graph of total daily energy usage per year.

Overall, that means we reduced our total energy usage by about one-third over seven years.
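The headline fractions are easy to sanity-check from the daily averages quoted above (a back-of-envelope in JavaScript):

var gas2005 = 31, gas2011 = 24;        // gas, kWh per day
var total2005 = 48, total2011 = 33;    // gas + electricity, kWh per day

console.log(1 - gas2011 / gas2005);     // ~0.23: "about a quarter"
console.log(1 - total2011 / total2005); // ~0.31: "about one-third"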


Thanks again to @andysc for helping create the graph from meter readings on irregular dates.


Halving our electricity usage

I learnt something interesting today: between 2007 and 2011, we halved the amount of electricity we use in our house:

Total electricity usage per year (kWh)

In 2007, we used 6783 kWh of electricity (for electricity, a kilowatt hour is the same thing as a ‘unit’ on your bill). In 2011, by contrast, we used 3332 kWh (or ‘units’). 2007 was slightly on the high side (compared with 2006) because we had no gas fire in the living-room during the winter of 2006-7 so we’d used an electric oil heater during the coldest weeks (we don’t have central heating in that room)[1].

That’s an average of 19 kWh per day in 2007 compared with 9 kWh per day in 2011. Which is quite a difference. So what changed?

In early 2008, I got a plug-in Maplin meter (similar to this one) and one of the very early Current Cost monitors, which display in real time how much electricity is being used in your whole house:

A classic Current Cost monitor

Aside from the fun of seeing the display numbers shoot up when we switched the kettle on, it informed us more usefully that when we went to bed at night or out to work, our house was still using about 350 Watts (which is 3066 kWh per year[2]) of electricity. That’s when the house is pretty much doing nothing. Nothing, that is, apart from powering:

  • Fridge
  • Freezer
  • Boiler (gas combi boiler with an electricity supply)
  • Hob controls and clock
  • Microwave clock
  • Infrared outside light sensor
  • Print/file server (basically a PC)
  • Wireless access point
  • Firewall and Internet router
  • DAB clock radio
  • ADSL modem
  • MythTV box (homemade digital video recorder; basically another PC)

And that’s the thing: this ‘baseline’ often makes a lot of difference to how much electricity a house uses overall. 3066 kWh per year was about 45% of 2007’s total of 6783 kWh.
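The watts-to-kWh conversion behind these figures (see footnote [2]) is worth spelling out, since it recurs throughout this post:

// A constant load in watts, running all year, converted to kWh
function kwhPerYear(watts) {
    return (watts / 1000) * 8760;  // 8760 hours in a year
}

console.log(kwhPerYear(350));  // ~3066 kWh: the original baseline
console.log(kwhPerYear(100));  // ~876 kWh: fridge, freezer, and co.
console.log(kwhPerYear(230));  // ~2015 kWh: the improved baseline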

The first six items on that list draw less than 100 Watts (876 kWh per year) altogether. They’re the things that we can’t really switch off. But there were clearly things that we could do something about.

Over the next couple of years, we reduced our baseline by about 100 Watts by getting rid of some of the excessive computer kit, buying more efficient replacements for the old print/file server and MythTV box, and replacing most of our lightbulbs with energy-efficient equivalents. We also, importantly, changed our habits a bit: we got more careful about switching lights off when we weren’t using them (which wouldn’t affect the baseline but does affect the overall energy usage), and about switching off, say, the stereo amplifier when we weren’t using it.

That brought our baseline down to about 230 Watts (2015 kWh per year), which is a lot better, though it’s still relatively high considering that the ‘essentials’ (eg fridge and freezer) contribute less than half of that.

And that’s about where we are now. We tended to make changes in fits and starts but none of it has been that arduous. I don’t think we’re living much differently; just more efficiently.


[1] The complementary gas usage graph shows lower gas usage for that year for the same reason; I’ll blog about gas when I have a complete set of readings for 2011.
[2] 350 Watts divided by 1000, then multiplied by 8760 hours in a year.
Photo of the Current Cost monitor was by Tristan Fearne.
Thanks also to @andysc for helping create the graph from meter readings on irregular dates.


UX hack at London Green Hackathon

At the London Green Hackathon a few weeks ago, the small team that had coalesced around our table (Alex, Alex, Andy, and me) had got to about 10pm on Saturday night without a good idea for a hack; in this case, a piece of cool software relevant to the theme of sustainability. We were thinking about creating a UK version of the US-based Good Guide app using their API, to which we had access. The Good Guide rates products according to their social, environmental, and health impacts; the company makes this data available through an API that programmers can use to write applications. Good Guide uses this API itself to produce a mobile app which consumers can use to scan the barcodes of products to get information about them before purchase.

Discussing ideas

The problem is that the 60,000 products listed in the Good Guide are US brands. We guessed that some would be common to the UK, though. We wondered if it would be possible to match the Good Guide list against the Amazon.co.uk product list so that we could look up the Good Guide information about those products at least. Unfortunately, when we (Andy) tried this, we discovered that Amazon uses non-standard product IDs on its site, so it wasn’t possible to match the two product lists.

The equivalent of the Good Guide in the UK is The Good Shopping Guide, of which we had an old copy handy. The Good Shopping Guide is published each year as a paperback book which, while a nicely laid out read, isn’t that practical for carrying with you to refer to when shopping. We discovered that The Ethical Company (who produce the Good Shopping Guide) have also released an iPhone app of the book’s content but it hasn’t received especially good reviews; a viewing of the video tour of the app seems to reveal why.

Quite late at night

By this point it was getting on for midnight and the two coders in our team, Andy and Alex, had got distracted hacking a Kindle. Alex and I, therefore, decided to design the mobile app that we would’ve written had we (a) had access to the Good Shopping Guide API and (b) been able to write the code needed to develop the app.

While we didn’t have an actual software or hardware hack to present back at the end of the hackathon weekend, we were able to present our mockups, which we called our ‘UX hack’ (a reference to the apparently poor user experience (UX) of the official Good Shopping Guide mobile app). Here are the mockups themselves, along with a summary of the various ideas our team had discussed throughout the first day of the hackathon.
