
OpenSensors.IO

News and Updates about OpenSensors.IO

Tips for Installing a Community Air Quality Sensor Network

Small air pollution sensor technologies have enabled the deployment of numerous sensors across a small geographic area, supplementing existing monitoring networks and significantly reducing the cost of longer-term community air pollution studies. This helps mitigate a risk of current approaches to monitoring air quality, which rely on only a dozen or so stations per region and may give you an average that is not representative of what’s happening where you live.

What are you trying to do?

Air quality is affected by many possible contaminants. The Environmental Protection Agency (EPA) has identified six “criteria pollutants” as pollutants of concern because of their impacts on health and the environment. The criteria pollutants (http://www.epa.gov/airquality/urbanair/) are:

  1. ozone (O3) http://www.epa.gov/air/ozonepollution/
  2. particulate matter (PM) http://www.epa.gov/air/particlepollution/
  3. carbon monoxide (CO) http://www.epa.gov/airquality/carbonmonoxide/
  4. nitrogen dioxide (NO2) http://www.epa.gov/air/nitrogenoxides/
  5. sulfur dioxide (SO2) http://www.epa.gov/air/sulfurdioxide/
  6. lead (Pb). http://www.epa.gov/air/lead/

Under the Clean Air Act, the EPA has established primary and secondary National Ambient Air Quality Standards (NAAQS) for these six pollutants. As you begin, keep in mind what you want to measure and how that information will be used. Is there a final output or report you need to produce?

Understand your sensor choices for collecting air quality data

Commercially available sensors can measure the level of potential contaminants including O3, NO2, NO, SO2, CO, PM2.5 and lead. These devices should be designed to be easy to connect and provide quality data measurements so that non-technical community groups can deploy them.

Here are some factors to consider in assessing options for sensors to collect air quality data:

  • cost
  • operating lifetime
  • accuracy, precision, and bias of measurement
  • range of sensitivity
  • speed of response time
  • maintenance requirements
  • reliability

For more information on what and how to measure, see https://cfpub.epa.gov/si/si_public_file_download.cfm?p_download_id=519616

Beyond the sensors, you will need to make tradeoffs between cost and redundancy for the best network connectivity.

  • Point to point – lowest cost, greater number of coverage points, least redundancy for each individual point
  • Mesh – higher cost, greater redundancy

Most community-based sensor networks are adopting point-to-point connectivity because of its ease of connection and low cost. We have already published a guide to the pros and cons of each connectivity option; use it to find the network that best fits your project.

Our Process

OpenSensors recommends a phased approach, from proof of concept to full-scale deployment, to ensure a successful installation of an IoT network in a business environment. Our aim is to reduce the time to go live and minimize risk.

Phase 1 Evaluate sensors:

Evaluate different sensors for quality, signal-to-noise ratio, power consumption and ease of setup by trying them out on a very small scale in a lab.

Phase 2 Proof of concept:

Do a full end-to-end test to verify that the queries and analytics are feasible. Connect 5 to 10 sensors to a cloud infrastructure.

Phase 3 Pilot phase:

Move out of the lab into your actual environment. Typically, this requires somewhere between 30 and 100 sensors. We suggest a one to two-month test to ensure that the sensors work at scale and the gateway can handle the load, similar to production usage.

In addition to testing the sensors in the wild, this is the time to think through your onboarding process for the devices. Questions like who will install the sensors feed into design decisions on the firmware, such as how much pre-configuration has to be done. We recommend a ‘just works’ approach and an assumption that all sensors will be installed by people who will not configure firmware. If you need to deploy 200-300 sensors, the installation engineers need to be able to deploy a lot of sensors in a distributed physical environment over a short amount of time. It is much more efficient for your sensors to be pre-configured. In these situations, we usually give people a simple interface to enable them to add metadata such as location and elevation. Sensors should be labelled clearly and details pre-loaded on a cloud platform like OpenSensors before they are deployed so that adding metadata is a matter of 1-2 steps.
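As a sketch of what that pre-loading step might look like, here is a minimal helper that bundles a sensor’s label and location metadata into a record before deployment. The field names and labelling scheme are illustrative assumptions, not the OpenSensors API.

```python
import json

def build_device_record(device_id, label, lat, lon, elevation_m):
    """Bundle the metadata an installer would attach in 1-2 steps.

    All field names here are hypothetical, chosen for illustration.
    """
    return {
        "deviceId": device_id,
        "label": label,            # matches the physical sticker on the sensor
        "location": {"lat": lat, "lon": lon},
        "elevation": elevation_m,  # metres above ground level
    }

record = build_device_record("dev-0042", "Floor 3 / Desk 17", 51.5074, -0.1278, 12.0)
print(json.dumps(record))
```

With records like this pre-loaded, the installer only has to match the sticker on the device to an entry and confirm its location.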

Phase 4 Plan and implement full-scale deployment:

After the pilot phase, there should be enough data to verify network performance and your choices for sensors and connectivity, after which, full deployment can be planned in detail and implemented.

Want to create your own Air Quality project?

The EPA Smart City Air Challenge (https://www.challenge.gov/challenge/smart-city-air-challenge/) is now live. The challenge aims to help communities figure out how to manage installations of 250 to 500 sensors and make the data public. OpenSensors.io is free to use for community projects working on IoT Open Data projects and will be supporting the EPA’s initiative.

Contact us if you would like assistance on sensor selection, network design, or planning a proof of concept deployment.

Path to Smart Buildings

Whether you are a building manager planning efficient space usage or an architect looking to design state-of-the-art buildings, we have broken down the steps to get you to your desired end goal. IoT planning should start with the business needs, of course, and quickly move from the component layer all the way up to the application layer. We need to figure out what core data should be gathered and ways to effectively leverage that data. These IoT solutions require an end-to-end, or device-to-cloud, view.

A Phased implementation approach works best.

We have found that the most successful IoT projects follow a phased implementation approach: Design Phase, Proof of Concept, Pilot, and Deployment. The design phase asks questions such as which sensors to use and who will be installing and maintaining them. For Proof of Concept, a lab evaluation should include hooking up 5-8 sensors all the way through a gateway to data collection in the cloud. This will give enough real data to verify that the queries and the analytics are feasible. The Pilot Phase ensures that the sensors work at scale and that the gateway configuration has been made easy for the deployment specialists. A pilot phase should be about 40 sensors, depending on sensor density. At this point, you can scale up to the number of sensors and the bandwidth required for full deployment.

OpenSensors’ Deployments

We have built hardware, installation and network provider partnerships and relationships to help customers get rollouts live efficiently. Either roll out your own network or we will put you in touch with your local sensor installation specialist to take care of the install and maintenance. We are working with customers and the community to understand what is required at each level for your IoT solution and can ease development and integration issues.

Lessons Learned From First Generation IoT Installations

At first glance, Wi-Fi-based sensors seem like a good choice for a non-consumer-facing sensor network; however, we have discovered that Wi-Fi has some significant drawbacks.

Access

One of the biggest drawbacks to Wi-Fi enabled sensors in a corporate environment is gaining access. Corporate IT often has valid security concerns about hundreds if not thousands of sensors joining the network, and has deployed corporate firewalls that block any access. Often this means that we are not allowed to spin up our own Wi-Fi network to serve as a gateway for a customer’s IoT sensor network. If IT has already deployed a Wi-Fi network, they are rarely willing to provide the passwords to allow the IoT network devices and gateways to take advantage of it. Relying on corporate Wi-Fi can make on-site installations and maintenance extremely complex and painful. The whole project becomes dependent on the goodwill of a network administrator every time maintenance needs to be performed.

Power

Wi-Fi has good transmission range, but that comes at the cost of high power usage. With a short battery life, maintenance costs for Wi-Fi sensors are higher than for low-power alternatives. One wireless protocol that we see in many successful deployments is LoRa, because it offers long transmission range at much lower battery usage than Wi-Fi.

Moving to LoRa and other long range protocols

If you follow our blog and publications, you will notice we have been talking a lot about network technologies; this isn’t a coincidence. We have spent a long time evaluating and piloting these stacks with our community.

Network access and battery constraints are driving many IoT installations off Wi-Fi and onto long range networks. LoRa is working well for us so far across a number of use cases; most of our customers spin up a private network. The ecosystem of providers is maturing and we are finding a lot of companies adopting existing sensors for their networks. Gateway providers such as Multi Tech provide good support, allowing the long tail of small scale (< 250 sensor installs) hardware providers to thrive.

LoRa is a wireless protocol that promises between two and five kilometres of transmission range between sensors and gateway; if you haven’t already done so, please read our introduction to what it is. With a separate LoRa network, facilities and/or operations teams can install and manage the whole operation without the access and security issues of using the corporate Wi-Fi network. A typical network will have hundreds of sensor devices sending messages to a gateway. Because the LoRa gateway is a self-contained system, we can have the LoRa network sit completely outside of the corporate firewall (backhauling over GSM) and minimize IT security concerns.

One LoRa gateway can normally cover an entire estate. This can significantly reduce infrastructure, deployment, and administration costs compared to shorter range wireless options like Zigbee or Bluetooth that require complex installs. Our aim is to have a system that non-technical engineers can roll out and support (more on how to do this in later blog posts), but in most cases the OpenSensors team is the equivalent of ‘2nd line support’ to the onsite team, who have integrated our APIs with their helpdesk ticketing systems.

LoRa networks can be public or private. An example of a public network is The Things Network, and we continue to work with and support that community. Most current commercial projects are running private networks at this time, but it will be interesting to see how that evolves.

To conclude, LoRa is working well for us at the moment, but we will keep researching other networks to understand the pros and cons of all the network providers. Sigfox, for example, is a very interesting offering that we will properly test over the next few months.

Savvy Building Managers Use Sensors to Reduce Operating Expenses

Sensor networks are emerging as a mission critical method for offices and commercial spaces to save money. Offices and commercial spaces are undergoing a smart transformation by connecting and linking HVAC, lighting, environmental sensors, security, and safety equipment. Building and facilities managers are also installing utilization sensors to manage their spaces more efficiently.

Main benefits of data driven buildings:

  • Operational efficiency
  • Use data for better design
  • Better workspace experience for employees

Changing workforce

Recently we helped a company design a prototype of a desk sensor monitoring system. Because so many of their people were working from home, they wanted to accurately measure peak demand during the day to see if they could save 10-20% of their desk space. Goals for the system were:

  • Monitor desk occupancy anonymously.
  • Minimize installation and deployment costs: rely on solutions that were simple enough that existing non-expert personnel could be trained to deploy.
  • Minimize day-to-day maintenance: this drove strategies for long battery life, among others.
  • Design a deployment process that ensured the install team could easily add sensor location metadata to allow for rich reporting and analysis once the IoT sensor network was operational.
  • Limit the IT resources needed for deployment.

The phased approach works best

First, we looked at many sensors, evaluating quality, signal-to-noise ratio and power consumption. It’s always a good idea to get a handful of different types of sensors and try them out at a very small scale. We chose an infrared sensor with good battery lifetime and a single LoRa gateway that could support all the floors and provide a connection to the cloud.

Next we did a full end-to-end test, where we hooked up 5-10 sensors to a cloud infrastructure all the way through the connectivity gateway. Now we had real data flowing into the infrastructure and could verify that the queries and analytics were feasible. This step makes sure everything works as planned and that you will get all the data you need.

Once you’re happy with the proof of concept phase, it is time for the real pilot phase. Instead of having just a handful of working sensors, now you’ll hook up an entire floor, or a street, or whatever your use case might be. It should be somewhere between thirty and a hundred sensors. At this point you can ensure that the sensors work at scale and that the gateway can handle the load. Typically we see customers running these for a month or two to get a good feel for how the sensors will perform in a production situation.

After the pilot phase, you should have enough data to verify network performance and your choices for sensors and gateways. Now you can plan the full deployment in detail. It’s been our experience, based on a number of customer installations, that the most successful IoT networks follow these steps in a phased implementation approach.

The technology at the silicon, software, and system level continues to evolve rapidly and our aim is to reduce the time to go live and minimise risk. The internet of things is a nebulous term that includes quite a lot of specialised skillsets such as sensor manufacturing, network design, data analysis, etc.

In order to make projects successful, we have taken the approach of building many hardware, installation and network provider partnerships, and relationships to help customers succeed as opposed to trying to do it all ourselves. We have been working with customers to develop methods to lower the sensor density and in turn lower the cost of projects whilst still getting comparable accuracy.

Contact us if you would like assistance on sensor selection, network design, or planning a proof of concept deployment.

Getting to Grips With IoT Network Technologies

How sensors communicate with the internet is a fundamental consideration when conceiving of a connected project. There are many different ways to connect your sensors to the web, but how do you know which is best for your project?

Having just spent the better part of a week researching these new network technologies, I’ve distilled this brief guide to the key aspects to focus on for an optimal IoT deployment:

Advanced radio technology

  • Deep indoor performance – networks utilising sub-GHz ISM (industrial-scientific-medical) frequency bands such as LoRaWAN, NWave and Sigfox are able to penetrate the core of large structures and even subsurface deployments.
  • Location aware networking – a variety of networks are able to track remote sensors even without the use of embedded GPS modules.
  • Supporting sensors moving between hubs – with advanced handoff procedures and innovative network topologies, mobile sensors can move around an area and remain in contact with core infrastructure without disrupting data transmission. Intelligent node handoff is also crucial for reducing packet loss: if the connection to one hub is hampered by passing through particularly chatty radiowaves, the node can switch to a better placed hub to relay its crucial payload.
  • Interference resistance – the capability of a network to cleave through radio traffic and interference that would ordinarily risk data loss.

Low energy profiling

  • Device modes – LoRaWAN is a great case in point with three classes of edge node: the first, Class A, allows a brief downlink window after each uplink, i.e. after having sent a message, the sensor listens for new instructions; Class B nodes add a scheduled downlink slot, so the device checks in at certain points; and the last, Class C nodes, listen for downlink messages from LoRaWAN hubs at all times. The latter burns considerably more power.
  • Asynchronous communication – this enables sensors to communicate data in dribs and drabs where possible; services do not need to wait for each other, thereby reducing power consumption.
  • Adaptive data rates (ADR) – modern networks are able to dynamically allocate data rate depending on signal quality, attenuation, interference, distance to hub etc. This delivers real scalability benefits, frees up space on the radio spectrum (spectrum optimisation) and improves overall network reliability.

Security

  • Authentication – maintains data integrity by ensuring the sensor which is publishing that mission critical data really is that sensor and not an impostor node. Ensures information privacy.
  • End to end encryption (E2E) – prevents tampering and maintains system integrity.
  • Integrated security – good network security avoids potential breaches and doesn’t place the onus on costly, heavily encrypted message payloads.
  • Secure management of security keys – either written remotely on the initial install or embedded at manufacture, security keys are fundamental to system security. ZigBee’s recent security issue shows how not to manage security keys, by sending them unencrypted over-the-air to devices on an initial install.
  • Receipt acknowledgement – ensures mission critical data is confirmed received by network or device.

Advanced network design

  • Full bidirectional comms – enables over the air (OTA) updates, enabling operators to push new firmware or system updates to thousands of remotely deployed sparse sensors at the push of a button. This is critical to a dynamic and responsive network. As with device modes mentioned previously, bidirectionality allows deployed devices to function as actuators and take action (close a gate, set off a fire alarm etc) rather than just one-way sensors publishing to a server.
  • Embedded scalability and consistent QoS – as load increases on a network so too does the capacity of the network. This takes the form of adaptive data rates, prevention of packet loss by interference and channel-blocking, the ability to deploy over-the-air updates and ensuring the capability to add nodes, hubs and maintain existing assets without impacting on overall network service, perhaps through automatic adaptation.

There are also a number of legal, cost, market and power focused aspects worth considering that I shall not cover here. But, critically, it’s worth mentioning that the majority of these technologies operate on ISM (industrial-scientific-medical) frequency bands, and as a result are unlicensed. These bands are regulated and there are rules, but anyone operating on these bands can do so without purchasing a licence. Notably, you don’t have sole ownership of a slice of the spectrum; you don’t get exclusive access. Therefore, with a variety of other vendors blasting away across the radio waves, these technologies encounter significantly more interference than those on the licensed spectrum. However, the new networks (LoRa, Sigfox, NWave etc.) are based on protocols and technologies designed to better sort through this noisy environment, grab a channel and send a message.

Understanding that the airwaves are a chaotic mess underlines the importance placed on features such as adaptive data rates, node handoff and power saving methods such as asynchronous communication. Wired networks do not have to consider such things. But for most it’s not just a case of who shouts loudest wins. The majority of wireless protocols ‘play nice’ opting for a polite “listen, then talk” approach, waiting for a free slot in the airwaves before sending their message.

Some protocols such as Sigfox forego such niceties and adopt a shout loud, shout long approach, broadcasting without listening. A typical LoRaWAN payload takes a fraction of a second to transmit; Sigfox by comparison sends messages 3-4 seconds in length. Because it broadcasts without listening, Sigfox must operate within severe duty cycle limitations, which translate into a limited number of messages sent per device per day and severe data rate limitations.
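The arithmetic behind those limitations is easy to sketch. Assuming the 1% duty cycle limit typical of the 868 MHz ISM band and a ~3 second uplink airtime (both figures are assumptions for illustration; check your regional regulations and radio datasheet), the daily message budget works out as follows:

```python
SECONDS_PER_DAY = 24 * 3600
DUTY_CYCLE = 0.01            # 1% of airtime may be used for transmission
MESSAGE_AIRTIME_S = 3.0      # a long, slow Sigfox-style uplink

airtime_budget_s = SECONDS_PER_DAY * DUTY_CYCLE       # 864 s of transmit time per day
max_messages_per_day = int(airtime_budget_s / MESSAGE_AIRTIME_S)
print(max_messages_per_day)  # 288 uplinks per day at the absolute ceiling
```

In practice networks cap well below this theoretical ceiling, but the calculation shows why a 3-4 second message time is so costly compared with a sub-second LoRaWAN payload.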

These choices also translate into varying costs, and critically, into battery life limitations and gains, the crux of any remote deployment.

See this link for a matrix of the major technologies currently vying for network domination.

What Is LoRaWAN?

What is LoRaWAN and why is it “better” than Zigbee?

Even long-time IoT enthusiasts struggle with the wealth of technologies on offer these days. One of the most confusing phenomena for someone who isn’t an RF engineer is the scale and range of LoRaWAN. If you’ve been in the game for a while, you may have used a ZigBee radio module for wireless data transmission in your own projects. ZigBee-compliant modules became a gold standard for many industrial applications in the 2000s, featuring >10m range (it was said to be 100m, but that was hardly ever achieved), up to hundreds of kbit/second transfer rate (depending on the model and radio band used) and message encryption by default. Over most cheap proprietary RFM22 transceivers, ZigBee also offered an industry standard following the IEEE 802.15.4 specification for mesh networking. This allowed ZigBee devices to forward messages from one to another, extending the effective range of the network. Despite their rich features, ZigBee devices are limited in range and limiting in their power consumption and potential use in IoT applications. And this is where LoRaWAN comes into play: it’s a Low-Power Wide Area Network (LPWAN) standard promising a reach of tens of kilometres for line-of-sight connections and aiming to provide battery lives of up to ten years. How can this work?

First, let’s contrast short-range radio standards like ZigBee with LPWAN standards like LoRaWAN. RFM22, ZigBee and LPWAN all use radio frequencies in the ultra high frequency (UHF) range. Following the ITU 9 classification, these are devices that use a carrier frequency of 300 MHz to 3 GHz. That is, the radio waves have a wavelength of 10-100 cm, a tiny proportion of the electromagnetic spectrum. Here we find television broadcasts, mobile phone communication, 2.4 GHz WiFi, Bluetooth, and various proprietary radio standards. We all know that television broadcasting transmitters have a significant range, but clearly that’s because they can pack some punch behind the signal. The carrier frequency itself can therefore not explain the range of LPWAN standards; there must be another reason that LoRaWAN does better than the other radio standards.

There are all sorts of hardware trickery that can be applied to radio signals. Rather than allowing those electromagnetic waves to orientate randomly on their way to the receiver, various polarisation strategies can increase range. A circular-polarised wave that drills itself forward can often more easily penetrate obstacles, whereas linear-polarised signals stay in one plane when progressing towards the receiver, concentrating the signal rather than dispersing it in different directions of the beam. However, these methods require effort and preparation on both the sender and receiver side, and wouldn’t really lend themselves to IoT field deployment…

The secret sauce of LPWAN is the modulation of the signal. Modulation describes how information is encoded in a signal. From radio broadcasting stations you may remember ‘AM’ or ‘FM’: amplitude or frequency modulation. That’s how the carrier signal is changed in order to express certain sounds. AM/FM are analog modulation techniques, while digital modulation interprets changes like phase shifts in the signal as binary toggles. LPWAN standards use a third set of methods, spectrum modulation, all of which get away with very low, noisy input signals. So, as the key function of LPWAN chipsets is the demodulation and interpretation of very faint signals, one could think of a LoRaWAN radio as a pimped ZigBee module. That’s crazy, isn’t it? To understand in a little more detail how one of the LPWAN standards works, in the following we are going to focus on LoRaWAN, as it is really ‘the network of the people’ and because The Things Network (a world-wide movement of idealists who install and run LoRaWAN gateways) supports our idea of open data.

LoRaWAN uses a modulation method called Chirp Spread Spectrum (CSS). Spread spectrum methods contrast narrow band radio as ‘they do not put all of their eggs into the same basket’. Consider a radio station that transmits its frequency-modulated programme with high power at one particular frequency, e.g. 89.9 MHz (the carrier is 89.9 MHz with modulations of about 50 kHz to encode the music). If you get to receive that signal, that’s good, but if there is a concurrent station sending their programme over the same frequency, your favourite station may get jammed. With spread spectrum, the message gets sent over a wide frequency range, but even if that signal is just above background noise, it is difficult to deliberately or accidentally destroy the message in its entirety. The ‘chirp’ refers to a particular technique that continuously increases or decreases the frequency while a particular payload is being sent.

The enormous sensitivity and therefore reach of LoRaWAN end devices and gateways has a price: throughput. While the effective range of LoRaWAN is significantly higher than ZigBee, the transmitted data rate of 0.25 to 12.5 kbit/s (depending on the local frequency standard and so-called spreading factor) is a minute fraction of it – but, hey, your connected dishwasher doesn’t have to watch Netflix, and a payload of 11-242 bytes (again, depending on your local frequency standard etc) is ample for occasional status updates. Here is where the so-called spreading factor comes into play. If your signal-to-noise ratio is great (close proximity, no concurrent signals, etc), you can send your ‘chirp’ within a small frequency range. If you need to compensate for a bad signal-to-noise ratio, it’s better to stretch that ‘chirp’ over a larger range of frequencies. However, that requires smaller payloads per ‘chirp’ and a drop in data rate.
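The trade-off between spreading factor and throughput can be sketched with the standard LoRa bit-rate formula; the 125 kHz bandwidth and 4/5 coding rate below are typical EU868 assumptions.

```python
def lora_bit_rate(sf, bandwidth_hz=125_000, coding_rate=4 / 5):
    """Raw LoRa data rate in bit/s for a given spreading factor.

    Each symbol carries sf bits and lasts 2**sf / bandwidth seconds;
    the coding rate accounts for forward-error-correction overhead.
    """
    return sf * (bandwidth_hz / 2 ** sf) * coding_rate

for sf in (7, 12):
    print(sf, round(lora_bit_rate(sf), 1))
# SF7 gives ~5468.8 bit/s; SF12 stretches each chirp and drops to ~293.0 bit/s
```

Doubling down on the spreading factor roughly halves the symbol rate while adding only one bit per symbol, which is why the data rate collapses so quickly as range is extended.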

Power consumption, reach and throughput are all linked. To burst out a narrow transmission consumes more power than to emit a spread signal. Hence, LoRaWAN implements an adaptable data rate that can take into account the signal-to-noise ratio as well as the power status of a device.

Taking the Air at the Turk’s Head

Summary

Opensensors.io are pioneers in open data and the internet of things, surfacing a wide range of data sets for open analysis. As an open data aggregator we deliver content over a common infrastructure; whether air quality or transport data, you only have to think about one integration point. Future cities need low data transaction costs for friction free operation; technical gaps slow progress, so keeping the number of integration points low makes sense for everybody.

Our journey starts here; as we build out our open data content, expect to see more stories, more insight and hopefully some catalysts for positive change.

Before our first story, consider what will make open data and the Internet of things useful.

We must bridge the gap from data to information, allowing consumers to abstract away the complexity of IoT and ask questions that make sense to them.

Take data from the London Air Quality Network (LAQN): the network is sparse, so it’s improbable that our need maps directly to a sensor. By coupling some simple Python code with OpenSensors data we’ll mash some LAQN data together to get some insight about air quality in Wapping.

In this story I’ll show how we can bridge the information gap with some simple code, yielding valuable insight along the way!

Chapter 1: Opensensors.io Primer

First, a quick primer on how data is structured in opensensors.io (for more detail, check out our forum and glossary of terms):

  • Devices – each connected ‘thing’ maps to a secured device; things map one-to-one to devices
  • Topics – data is published by devices to topics; a topic is a URI and is the pointer to a stream of data
  • Organisations (orgs) – an organisation owns many topics and is the root of an org’s topic URIs
  • Payloads – payloads are the string content of messages sent to topic URIs, typically JSON

Also check out our RESTful and streaming APIs on the website for more background and online examples.

Chapter 2: Putting JSON to Work

You can use the OpenSensors REST API to gather data for research, but it comes in chunks of JSON, which isn’t great for data science. For convenience I wrapped up some common data sources for London into a Python class. Since IoT data is rarely in a nice columnar form, it’s valuable to build some simple functions to shape the data into something a bit more useful.
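As a sketch of that shaping step, here is a minimal function that flattens topic-plus-JSON-payload messages into rows. The message shape (a topic string and a payload with "datetime" and "value" fields) is an assumption for illustration, not the API’s actual schema.

```python
import json

# Two made-up messages in the assumed shape returned by a REST query.
messages = [
    {"topic": "/orgs/laqn/ck1/no2",
     "payload": json.dumps({"datetime": "2016-09-01T08:00:00Z", "value": 61.2})},
    {"topic": "/orgs/laqn/ck1/no2",
     "payload": json.dumps({"datetime": "2016-09-01T09:00:00Z", "value": 74.8})},
]

def to_rows(messages):
    """Flatten topic + JSON payload into (topic, timestamp, value) tuples."""
    rows = []
    for m in messages:
        body = json.loads(m["payload"])
        rows.append((m["topic"], body["datetime"], body["value"]))
    return rows

rows = to_rows(messages)
print(rows[0])
```

Once the data is in flat rows like this, loading it into a columnar tool for resampling and charting is straightforward.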

Chapter 3: Introducing the Turks Head

I’m fortunate to spend a lot of time in Wapping, in and around the community of the Turk’s Head Workspace and Cafe, but unfortunately we don’t have a local LAQN sensor. With a bit of data science and opensensors.io open data we can estimate what NO2 levels might be around the cafe and workspace.

A simple way to estimate NO2 is a weighted average of all the LAQN sensors, where we derive the weights from the distance between each sensor and our location. Since we want to overweight the closest sensors, we can use an exponential decay to deflate the weights towards zero for sensors far away.
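That weighting scheme can be sketched in a few lines; the sensor distances, readings, and the 2 km decay length below are all made up for illustration.

```python
import math

def estimate(readings, decay_km=2.0):
    """Distance-weighted NO2 estimate.

    readings: list of (distance_km, no2) pairs, one per LAQN sensor.
    Weights decay exponentially with distance, so nearby sensors dominate.
    """
    weights = [math.exp(-d / decay_km) for d, _ in readings]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, readings)) / total

# A sensor 1 km away counts far more than one 10 km away.
print(round(estimate([(1.0, 60.0), (4.0, 40.0), (10.0, 90.0)]), 1))
```

The decay length is a tuning choice: a shorter decay makes the estimate more local, a longer one smooths across the whole network.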

For the Turk’s Head, the sensors in Aldgate, Southwark, Tower Hamlets and the City are the closest and have the biggest impact on our estimate.

Chapter 4: Getting into the Data

With our air quality time series and our weights, we can dig into what our estimates for the Turk’s Head look like (NO2 * weight). Here’s the series for NO2 over the last 20 days; it looks like the peaks and troughs repeat, and the falling or rising trend is persistent in between.

Trend followers in finance use moving averages to identify trends, for example the MACD (moving average convergence divergence) indicator. MACD uses the delta between a fast and a slow moving average to identify rising or falling trends; we’ll do the same. For our purposes we’ll speed the averages up, using decays of 3 and 6 periods (LAQN data is hourly and we are resampling to give estimates on the hour).
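A minimal version of that MACD-style indicator might look like this; the hourly NO2 readings are made up, and the smoothing factor 2/(span+1) is one common choice of exponential decay.

```python
def ema(series, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2 / (span + 1)
    out, level = [], series[0]
    for x in series:
        level = alpha * x + (1 - alpha) * level
        out.append(level)
    return out

no2 = [55, 58, 64, 70, 66, 60, 52, 48, 45, 44]  # made-up hourly readings
fast, slow = ema(no2, 3), ema(no2, 6)

# 'good' (falling NO2) when the fast average drops below the slow one
trend = ["good" if f < s else "bad" for f, s in zip(fast, slow)]
print(trend[-1])
```

With the sample series falling towards the end, the fast average sits below the slow one, so the final state reads ‘good’.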

What can we conclude from the charts for the Turk’s Head? From the left hand chart we can see the data is a little noisy, with a flat line showing some missing or ‘stalled’ data. Looking at the 3 and 6 period decayed averages, the data is smoother, with the faster average persistently trending ahead of the slower one.

Even with fast moving decays, the averages cross only a couple of times a day, showing persistence when in trend. So using a simple trend indicator and the LAQN, we can build a simple air barometer for the Turk’s Head.

Good: 3 period exp. average < 6 period average (green)
Bad: 3 period exp. average > 6 period average (red)

This is helpful because, given a persistent trend state, if we have ‘good’ air now we’ll probably have ‘good’ air for the following hour.

Chapter 5: What’s the trend across London?

So we now have a means of defining how NO2 levels at the Turk’s Head are trending, but is the trend state predictable over a 24 hour period?

Remember we define good or bad air quality trend as:

Good: ‘fast’ average < ‘slow’ average = falling NO2
Bad: ‘fast’ average > ‘slow’ average = rising NO2

If we aggregate the data into hourly buckets we can visualise how much of the time, over the past 20 days, a sensor has been in an improving (‘good’) trend for a given hour.

x = hour of the day
y = percentage of the bucket that is in a ‘good’ state

We can see that for each 1 hour bucket (24 in total) there is a city wide pattern; if we aggregate across the city (using the same measure, the percentage of sensors in up or down trend) we get an idea of how NO2 trends over a typical day.
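Sketching that aggregation with pandas (the data layout, one boolean column per sensor, is an assumption for illustration):

```python
import pandas as pd

def percent_good_by_hour(states: pd.DataFrame) -> pd.Series:
    """states: indexed by timestamp, one boolean column per sensor
    (True = in a 'good', improving-NO2 trend). Returns the percentage
    of sensor readings in a 'good' state for each hour of the day."""
    flat = states.stack()                     # one row per (timestamp, sensor)
    hours = flat.index.get_level_values(0).hour
    return flat.groupby(hours).mean() * 100   # 24 buckets, hours 0..23
```

Plotting the resulting 24-value series gives the city-wide daily pattern described here.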

Our right hand chart shows the percentage of ‘good’ versus ‘bad’ NO2 sensor states across London, collected from about 80 sensors over the past 20 days.

Now, this is a really simple analysis, but it suggests the proportion of ‘good’ trends across London is high before 7am, then falls away dramatically during the morning commute. No surprises there.

But the pattern isn’t symmetrical; after the morning build-up peaks around lunchtime, when only ~20% of the city’s sensors show improving NO2, NO2 falls away throughout the afternoon. From a behavioural standpoint this makes sense; the morning commute is more concentrated than the evening one. Most of us arrive at the workplace between, say, 8 and 9am, but in the evening we may go to the gym, go out for dinner, or just work late. The dispersion of our exits from the city is wider than that of our entries.

Chapter 6: PM versus NO2

So far we have used NO2 as our core measure, in part because more LAQN sensors deliver NO2 data than deliver particulates. But let’s consider particulates for a moment: the LAQN delivers PM10 and PM2.5 measures, and the definitions can be found here.

Our temporal curves for particles differ from NO2, taking longer to disperse during the evening rush hour (remember we are measuring the percentage of sensors in a ‘good’ state). NO2 builds up faster and decays faster once peak traffic flows have completed, whereas particles linger, only fading deep into the night (on average).

Closing Thoughts

In our data set, NO2 and PM measures differ in their average behaviour over a typical 24 hour period.

  • Behavioural interventions will need to consider whether particulates or NO2 are the more impactful.

  • How can we communicate air quality to our citizens, and relate their personal needs to the measures most impactful on their lives?

  • Do we need additional sensors to create a more dense air quality resource? How can we allocate funds to optimally support network expansion and air quality services?

  • Knowing the characteristics of a sensor (location, calibration, situation [elevated, kerb side, A or B road]) will improve estimates; how can we deliver this metadata?

Plenty of food for thought…

Notes and Resources

Our stories are quick and dirty demonstrators to promote innovation and should be treated as such. All data science and statistics should be used responsibly :)

All of the code supporting this can be found on GitHub, with data sourced from OpenSensors’ LAQN feed; I use a postcode lookup to get long/lat coordinates for Wapping. I’ve also taken some inspiration from https://github.com/e-dard/boris and https://github.com/tzano/OpenSensors.io-Py, so thanks for those contributions!

http://www.londonair.org.uk/ https://www.opensensors.io/

The Path to Smart Buildings

Google ‘principles of good architectural design’ and you’ll get links to technology, to buildings and to all manner of other services, but it’s hard to find design principles for the tech services that facilitate smart buildings. Let’s remind ourselves what a smart building is, with the help of the sustainable tech forum: ‘The simple answer is that there’s automation involved somehow that makes managing and operating buildings more efficient’. The need is well documented, but we want to bridge to the ‘practice of designing and constructing buildings’; after all, that’s what architecture is about.

Opensensors hosted its first Smart Building Exchange (SBeX) event in September, and we are grateful to the panelists and attendees who made it such a success. Our goal was to bridge the gap between the widely documented features of smart buildings and the tech that underpins them. Through our workshops we decomposed tenant needs and identified services to support them using the value proposition canvas. We borrow from lean product design principles, since building operators need to rapidly innovate using processes inherited from startups. Mapping the pains and gains of users to the features and products of the tech stack revealed a common theme: data infrastructure. Data is the commodity that new services will be built upon; some will be open and others private, but data will be the currency of the next generation smart building.

Take integrated facilities management (IFM) where data serves the desire to deliver better UX at a lower cost with fewer outages. IFM has pivoted from a set of siloed software services to a set of application services overlaid upon a horizontal data infrastructure. For example:

  • Data science services will develop to identify ‘rogue’ devices operating outside expected patterns, they will identify assets that need inspection or replacement and schedule maintenance works using time and cost optimisation routines.
  • Digital concierge services will use personal devices, location based technology and corporate data (calendar and HR data) to optimise both user experience and spatial allocation.

So can we identify a tech architecture to support this pivot away from monolithic apps? Data services facilitated by a central messaging backbone allow the complexity of building services to be broken down and tackled one service at a time, lowering the risk of failure and allowing agile iterations at a reduced cost. Take the pillars of data driven applications for IFM identified by our workshop group (predictive/reactive alerting and tactical/strategic reporting): how might we go about servicing these needs? Consider how the path to smart buildings outlined below could help build an IFM product.

  • Build the value proposition founded on a clear vision of what your users want.
  • Identify the data that will drive your smart building product including open data
  • Identify the sensors needed to gather your data, they could be mobile devices or occupancy sensors
  • Identify connectivity from the sensor to your data infrastructure, this might be radio to IP connected gateways or directly onto the local network via POE (power over ethernet)
  • Structure your message payloads and commit to schemas to deliver repeatable processes for message parsing and routing within your building
  • Configure your events, turning your data into information using rules based platforms for IoT such as Node-RED
  • Build widgets and data services that can be bound together for dashboarding. By identifying common user needs across the enterprise we can operate a leaner system stack
  • Build user portals and dashboards using your common data services and components
  • Validate tenant user experience through surveys and modelling tenant behaviour using occupancy devices
  • Iterate to improve using data gathered throughout the building to deliver better products and experiences
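For the payload schema step above, a minimal sketch of what ‘committing to a schema’ might look like (the field names and `validate` helper are illustrative, not an OpenSensors standard):

```python
import json

# A committed payload schema for a hypothetical occupancy sensor:
# every message must carry these fields with these types.
OCCUPANCY_SCHEMA = {"sensor_id": str, "timestamp": str, "occupied": bool}

def validate(payload: str, schema=OCCUPANCY_SCHEMA) -> dict:
    """Parse a JSON payload and check it against the schema, so that
    downstream message parsing and routing stay repeatable."""
    msg = json.loads(payload)
    for field, ftype in schema.items():
        if not isinstance(msg.get(field), ftype):
            raise ValueError("bad or missing field: " + field)
    return msg
```

Rejecting malformed messages at the edge keeps every later stage of the pipeline working against one known shape.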

Opensensors has firmly backed Open Source and Open Data as the best way to yield value from the Internet of Things, choosing to collaborate with the tech community to enable facilities managers to build higher order systems focused on their domain expertise. Please contact commercialteam@opensensors.io if you need a smart building workshop or are ready to build your next generation smart building product.

Don’t Make Me Think

Expect the early adopters of ‘enchanted’ buildings to be our employers: the World Green Building Council estimates we spend 10% of our costs on facilities management, while 90% is the expense of executing our business. You don’t have to be an accountant to realise that a 1% improvement in productivity trumps a 1% saving in facilities costs by 9 to 1! So how might smart buildings deliver productivity and improved user experience (UX)?

Great UX should be pain free: “Don’t make me think” (Steve Krug). Whilst smart phones offer a means of logging in to a workplace, it’s a bind to install the app, log in and connect, and privacy and indoor location services remain a challenge. IoT tech such as OpenSensors, beacons, noise and air quality sensors, coupled with responsible anonymisation, can deliver on productivity, because improved building and personal wellness simply means we get more done. But how might this work?

Aarron Walter said, “Designers shooting for usable is like a chef shooting for edible.” As techies we can apply these ideas to civic interactions. Take a large office space: I arrive from out of town, visiting for a meeting with my project team. I register, head off to the flexible space and grab a desk, perhaps wasting time trying to find my colleagues. Each of the team then arrives; some may co-locate, others disperse, and there’s no convenient breakout space. The collaboration is diluted and we’re disturbing others. We ate, but it wasn’t a great meal.

The lack of an inexpensive, robust, secure and open tech stack rendered us powerless; we have been consuming ‘edible’ tenant experiences rather than a delightful meal. But tech is moving fast: expect new digital services, enabled by advances in IoT hardware and data software, to shake up the industry. Organisations ready to invest and experiment will move ahead; they’ll develop an ‘edge’ that will define their services and branding for years to come.

Digital concierge – expect to sign in digitally on a device that will bind you, your calendar, your co-workers and your building. Through data, expect intelligent routing to the best work space for your or your group’s needs.

Location based services – sensors enable ‘just in time’ cleaning services that clear flexible working space when meetings conclude, or sweep loitering coffee cups and deliver fresh coffee during breaks in longer workshops.

Environmental factors – expect IoT to bubble up environmental data such as air quality, temperature, humidity, light and noise that can be used to adjust HVAC systems in real time, or to aid interior designers in improving the workplace.

Smart facilities management – location based services coupled with smart energy grid technology will allow fine tuning of energy supply, reacting to changes in demand and national grid status (smart grid frequency response).

Data science – each of the above serves a specific need whilst wrangling data sets into an ordered store. Technology like opensensors can then add further value through real time dashboarding for health and safety or real time productivity management. Furthermore, once data is captured we can apply machine learning to understand more deeply the interactions of our human resources and physical assets, through A/B testing or other data science.

Unlocking great UX in buildings boils down to data: capturing it, wrangling it, applying science and iterating to make things better. First we must gather the data from the systems in place (see First ‘Things’ First) whilst supplementing it with new devices such as air quality sensors, occupancy sensors or beacons. Having provided a robust data fabric, tenants need to become active rather than passive, agile rather than rigid, in their approach to managing their assets. IoT devices and data services will deliver an edge in providing the best of breed user experience that tenants value so highly.

Monitoring for Earthquakes With Node-red

OpenSensors now captures seismic data from the Euro-Med Seismic Centre (EMSC) and the United States Geological Survey (USGS). Every ten minutes we poll the latest information on major and minor earthquakes around the globe and make it available via our application programming interface (API) or as an MQTT feed. In this short tutorial, we show you how to use OpenSensors together with Node-RED to receive email alerts whenever there’s a major incident in a region of interest. You can use this guide as a starting point for further experiments with Node-RED and develop your own earthquake-triggered workflows. Let’s shake it.

On OpenSensors

  • First, you need to log in to your account on OpenSensors, or sign up for one at https://opensensors.io if you haven’t done so already.

  • Next, it’s good practice to have a new ‘device’ for this application, i.e. a dedicated set of credentials you’re going to use to log in to OpenSensors for this particular set of MQTT feeds.

    • In the panel on the left, click My Devices in the Devices menu.
    • Click the yellow Create New Device button at the top of the page.
    • Optional: add a description, then press the disk icon to save your new device.
    • Take a note of your ‘Client id’ and ‘Password’ as you’re going to need them in your Node-RED workflow.

For Node-RED

Install node.js and Node-RED on your system. There’s a very good guide for this on the Node-RED website. Follow the instructions, including the separate section on Running Node-RED.

Once you’re ready, open a web browser and direct it to localhost:1880, the default address and port of the Node-RED GUI on your system.

(A very basic description of the Node-RED vocabulary can also be found at SlideShare.)

Developing a workflow

  • From the input panel of your nodes library on the left side, drag and drop a pink mqtt input node into the work area named Sheet 1.

  • Double-click the mqtt node. A window with configuration details opens.

    • Click the pen symbol next to ‘Add new mqtt-broker…’. Your Broker is opensensors.io, your Client ID and Password are those you generated in the previous step on the OpenSensors website, and User is your OpenSensors user name.

  • Once the Broker is defined, enter /orgs/EMSC/+ into the Topic field. This is going to instruct Node-RED to subscribe to all MQTT topics generated by the EMSC.
  • Optional: Set the Name of this node to ‘EMSC’.

  • Drag and drop a second mqtt input node. When you double-click the node, you will realise that the Broker settings default to the ones you previously entered.

    • Enter /orgs/USGS/+ in the Topics field and ‘USGS’ as optional Name.
  • Drag and drop a dark green debug node from the output panel on the left. While debugging has the connotation of fixing a problem, in Node-RED it’s the default way of directly communicating messages to the user.

  • Draw connection lines (“pipes”) from both mqtt nodes to the debug node.

  • Press the red Deploy button in the upper right corner. This starts your Node-RED flow. If everything worked, you should see ‘connected’ underneath the mqtt nodes, and your debug panel (on the right) should soon produce JSON-formatted output whenever there’s an event (which may take a while!).

While it is pleasing to be informed about every time the earth shakes, it soon becomes tedious staring at the debug panel in expectation of an earthquake. Also, you may not be interested in events in remote areas of the world, or exactly in those – whatever interests you.

We are going to extend our flow with some decision making:

First, we need to parse the information from the EMSC and USGS. For this example, we’re going to be particularly interested in the fields region and magnitude. There are plenty more fields in their records, and you may want to adjust this flow to your needs.

  • Drag and drop a pale orange function node from the functions panel into your flow. Connect both mqtt nodes to the input side (the left side) of your function node. Function nodes allow you to interact directly with your data using JavaScript.

  • Enter the following code (or download the OpenSensors workflow).
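The code block itself did not survive this version of the post, but it can be reconstructed from the flow JSON at the bottom of the page. Note that the `*` characters in the regular expressions appear to have been swallowed by markdown, so the exact patterns are a best guess. The function node body works on `msg` directly and ends with `return msg`; it is wrapped in a named function here only so it can be run outside Node-RED:

```javascript
// Reconstruction of the 'parse' function node body (regex patterns are a
// best guess). In the Node-RED function node, paste just the body of this
// function: `msg` is already in scope there.
function parse(msg) {
    // uppercase the payload (different centres report in mixed formats)
    msg.payload = msg.payload.toUpperCase();

    // extract the interesting fields with regular expressions,
    // instead of using JSON.parse which fails with null fields
    var places_with_ia_regex = /REGION":"(.*IA.*)","UPDATED/;
    var result1 = places_with_ia_regex.exec(msg.payload);

    var magnitude_regex = /MAGNITUDE":"?([0-9.]+)/;
    var result2 = magnitude_regex.exec(msg.payload);

    // if successful, set topic to the region and payload to the magnitude
    if (result1 && result2) {
        msg.topic = 'EVENT in ' + result1[1];
        msg.payload = result2[1];
        return msg;
    }
    // returning nothing drops the message
}
```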

Here be a JavaScript course… :-) In a nutshell, this code takes data from the ‘payload’ of the incoming message (read up on the topic and payload concept of Node-RED in the SlideShare article suggested earlier). The payload is parsed for the region and magnitude fields using standard regular expressions. If we can successfully extract the information (in this case, a region containing ‘IA’ somewhere in its name), we set the outgoing message’s payload to the magnitude and its topic to ‘EVENT in ’ plus the name of the region, and pass it on (‘return msg’) to the next node.

  • Drag and drop a lime green switch node from the function panel into your workflow. Connect the output of the function node to the input of the switch node. Configure the switch node (by double-clicking) to assert that the payload (the magnitude of the earthquake) is greater than 2. Only then is the message passed on.

  • Last, we’re going to drag and drop a light green e-mail output node from the social panel and configure it like an e-mail client, but with a default recipient: in this case, ohmygodithappened@gmail.com.

  • Connect the output of the switch node to our debug node, as well as to the outgoing e-mail node.

  • We can then deploy the new workflow and should see something like this after a while:

In this case, an event was detected ‘off the coast of Northern California’ with a magnitude of 4.4; at the same time, you should receive an e-mail with the region as the subject and the magnitude in the body.

We hope that this flow gets you started! Remember that Node-RED is superbly suited to interacting with hardware… imagine LEDs and buzzers indicating an earthquake.

The flow JSON: [{"id":"e9024ae0.16fdb8","type":"mqtt-broker","broker":"opensensors.io","port":"1883","clientid":"1646"},{"id":"2952b879.d6ad48","type":"mqtt in","name":"EMSC","topic":"/orgs/EMSC/+","broker":"e9024ae0.16fdb8","x":127,"y":104,"z":"82a1c632.7d5e38","wires":[["490a140f.b6f5ec","163677af.e9c988"]]},{"id":"54239d6.fabdc64","type":"mqtt in","name":"USGS","topic":"/orgs/USGS/+","broker":"e9024ae0.16fdb8","x":128,"y":159,"z":"82a1c632.7d5e38","wires":[["490a140f.b6f5ec","163677af.e9c988"]]},{"id":"490a140f.b6f5ec","type":"debug","name":"","active":true,"console":"false","complete":"false","x":538,"y":86,"z":"82a1c632.7d5e38","wires":[]},{"id":"163677af.e9c988","type":"function","name":"parse","func":"// uppercase the payload (different centres report in mixed formats)\nmsg.payload = msg.payload.toUpperCase();\n\n// extracting interesting fields with regular expressions,\n// instead of using JSON.parse which fails with null fields\nvar places_with_ia_regex = new RegExp(\"REGION\\\":\\\"(.*IA.*)\\\",\\\"UPDATED\\\"\");\nvar result1 = places_with_ia_regex.exec(msg.payload);\n\nvar magnitude_regex = new RegExp(\"MAGNITUDE\\\":([0-9].[0-9]+)\\\"\");\nvar result2 = magnitude_regex.exec(msg.payload);\n\n// if successful, sets topic to the region and payload to the magnitude\nif (result1 && result2) {\n msg.topic = 'EVENT in ' + result1[1];\n msg.payload = result2[1];\n return msg;\n}","outputs":1,"noerr":0,"x":296,"y":251,"z":"82a1c632.7d5e38","wires":[["64f4f2ea.9b0b0c"]]},{"id":"64f4f2ea.9b0b0c","type":"switch","name":"at least magnitude 2","property":"payload","rules":[{"t":"gte","v":"2"}],"checkall":"true","outputs":1,"x":428,"y":179,"z":"82a1c632.7d5e38","wires":[["490a140f.b6f5ec","f7bcc59c.084338"]]},{"id":"f7bcc59c.084338","type":"e-mail","server":"smtp.gmail.com","port":"465","name":"ohmygodithappened@gmail.com","dname":"","x":581,"y":256,"z":"82a1c632.7d5e38","wires":[]}]