Defining major Internet of Things concepts

The term ‘Internet of Things’ (IoT) is used so ubiquitously that it has become hard to pin down, and accordingly, its countless subsets are often equally ambiguous and used differently from person to person. This article by IoT Insider Editor Sam Holland offers some definitions of common Internet of Things jargon.

‘The IoT’ is often used as a catch-all that folds in other nebulous terms like ‘data-driven’ and ‘connected’, but the problem with these terms is that many technologies, such as smart home devices, are data-driven and connected without being clear examples of the IoT itself.

Put simply, an often overlooked aspect of the IoT is its use of artificial intelligence (AI) at the edge and/or in the cloud (both terms are covered later), and its ability to inform the decision making of machines and humans alike (consider the importance of the IoT in human-machine interfaces, also discussed later).

With all this in mind, the next sections will cover the IoT with a view to emphasising its AI capabilities – starting, fittingly, with the term ‘AIoT’: the Artificial Intelligence of Things.

The AIoT: Artificial Intelligence of Things

While the argument could be made that the term AIoT is slightly redundant – there would have to be ‘AI’ for the ‘IoT’ to exist in the first place – it is nevertheless a useful way to emphasise the union of the two technological concepts. It is the name for the collective utilisation of connected technologies (IoT sensors being a primary example) that gather and provide data to cloud- or edge-based artificial intelligence software. This software can reason about the data and present its conclusions to other technologies (e.g. smartphones) and humans alike.
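
To make that pipeline concrete, the minimal Python sketch below follows one batch of (invented) sensor readings from gathering, through an AI-style inference step, to a human-readable conclusion. The temperature readings, the threshold, and the notify_user helper are illustrative assumptions, not any particular platform’s API.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    sensor_id: str
    temperature_c: float  # hypothetical temperature sensor

def infer(readings: list[SensorReading]) -> str:
    """Toy 'AI' step: reason over the gathered data and draw a conclusion."""
    avg = mean(r.temperature_c for r in readings)
    # Threshold chosen purely for illustration
    return "overheating" if avg > 80.0 else "normal"

def notify_user(conclusion: str) -> None:
    """Stand-in for pushing the inference to a smartphone or dashboard."""
    print(f"AIoT conclusion: {conclusion}")

readings = [
    SensorReading("motor-1", 78.5),
    SensorReading("motor-2", 84.2),
    SensorReading("motor-3", 81.9),
]
notify_user(infer(readings))
```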

The enormous extent to which this data can be gathered, processed, and presented reflects just how applicable the AIoT is to mission-critical (or even just time-sensitive) applications. In fact, a strong example of an AIoT process takes place in modern satellite navigation (satnav) technology: every smartphone user with Google Maps may help to inform Google’s calculation of a driver’s estimated time of arrival (ETA) simply by having the app and being on the road.

Likewise, every record that Google holds of speed limits, traffic lights, and of course the number of vehicles on the road forms vital data. This, again, equips the given artificial intelligence with the capacity to reach a broad inference that can inform the decision making of both other AI systems and humans. Such an inference may be referred to as a common operating picture (COP) – more on which can be read in a previous IoT Insider article, ‘Defining the Internet of Things’.
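
As a rough illustration of how such crowdsourced data might feed an ETA calculation, the toy function below averages the speeds reported by phones on each road segment. The route and speed figures are invented, and a real satnav system naturally uses far more sophisticated models.

```python
def estimate_eta_minutes(segments):
    """Sum travel time per segment from crowdsourced average speeds.

    segments: list of (length_km, reported_speeds_kmh) tuples, where the
    speeds come from phones currently travelling on that road segment.
    """
    total_hours = 0.0
    for length_km, speeds in segments:
        avg_speed = sum(speeds) / len(speeds)  # naive aggregation step
        total_hours += length_km / avg_speed
    return total_hours * 60

# Invented route: three segments with speeds reported by other drivers
route = [(2.0, [48, 52, 50]), (5.0, [90, 95]), (1.0, [20, 15, 18])]
print(f"ETA: {estimate_eta_minutes(route):.1f} minutes")
```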

The Cloud and the Edge

The Cloud and the Edge are not to be confused with one another, but it is worthwhile considering them side by side for ease of comparison. Their common ground is that both are areas of the Internet where data can be stored and processed, and where inferences can be drawn. Their main differences, however, lie in their latency and in where they sit relative to the given end user and their device.

The Cloud is essentially an online system of remote data storage and processing that relies on the servers of data centres. More specifically, when a person uses their device to process data in the Cloud, they are in fact using the Internet to have that data stored on data centre servers, from which it can be retrieved using the correct log-in details.
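
In practice, this usually means a device authenticating and pushing its data to a remote endpoint over HTTPS. The sketch below shows the general shape of this using Python’s requests library; the endpoint URL, token, and payload fields are placeholders rather than any real service’s API.

```python
import requests  # third-party HTTP library: pip install requests

# Placeholder endpoint and credentials - not a real service
CLOUD_ENDPOINT = "https://example.com/api/v1/readings"
API_TOKEN = "replace-with-real-token"

def upload_reading(sensor_id: str, value: float) -> None:
    """Send one sensor reading to remote data-centre storage."""
    response = requests.post(
        CLOUD_ENDPOINT,
        json={"sensor_id": sensor_id, "value": value},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()  # surface any server-side error

def fetch_readings(sensor_id: str) -> list:
    """Retrieve the stored readings using the same credentials."""
    response = requests.get(
        f"{CLOUD_ENDPOINT}?sensor_id={sensor_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```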

The downside of this level of remote data storage, however, is at least two-fold: it introduces both security risks and latency problems. The Edge largely exists to circumvent such pitfalls: it comprises local – i.e. located ‘at the Edge’ – data points (such as IoT gateways) that sit closer to end users, reducing both the cyber threats and the latency that would otherwise be prevalent in a more distantly remote system.
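
A common pattern that follows from this is for an edge gateway to process raw readings locally, acting on anomalies immediately and forwarding only compact summaries to the Cloud – cutting both latency and network exposure. Below is a minimal sketch of such a gateway loop, assuming a read_sensor callable and a once-a-minute upload cycle (both invented for illustration).

```python
import time
from statistics import mean

def summarise(batch: list) -> dict:
    """Reduce a batch of raw readings to a compact summary at the edge."""
    return {"count": len(batch), "mean": mean(batch), "max": max(batch)}

def send_to_cloud(summary: dict) -> None:
    """Placeholder for an upload such as the one sketched earlier."""
    print(f"Uploading summary: {summary}")

def gateway_loop(read_sensor, interval_s: float = 60.0) -> None:
    """Gather locally, act instantly on anomalies, upload only summaries."""
    batch = []
    deadline = time.monotonic() + interval_s
    while True:
        value = read_sensor()
        if value > 100.0:           # illustrative local safety check:
            print("local alert!")   # no cloud round-trip needed
        batch.append(value)
        if time.monotonic() >= deadline:
            send_to_cloud(summarise(batch))
            batch.clear()
            deadline = time.monotonic() + interval_s
        time.sleep(1.0)
```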

As a result of these differences between the Cloud and the Edge, IoT device manufacturers are becoming increasingly reliant on the Edge, particularly as modern data demands continue to grow with no signs of slowing.

HMIs: human-machine interfaces

As considered throughout this article, a key part of the Internet of Things is its increasing reliance on artificial intelligence and its capacity to inform the decision making of other AI technologies and their human operators. At the forefront of this concept is the question of how the various software and device users can each interact with the data that the IoT presents to them.

HMI, or human-machine interface, is the collective name for the devices that achieve such interaction. While the term is self-explanatory (an HMI is any technology that allows one or more human operators to interface with a machine), the problem, of course, is that by this definition even a simple computer monitor counts as an HMI. For the sake of this IoT-focused discussion, however, an HMI may best be exemplified by a virtual reality (VR) headset.

This is chiefly because our modern reliance on the IoT and VR is already becoming linked to the increasingly realised concept of the Metaverse, wherein (for lack of a concrete definition) users interact with the Internet and virtual reality in seamless tandem. HMIs offer the very conduit by which this process can be realised, and as such, IoT-focused human-machine interfaces offer countless applications not only to consumers but to industry itself (consider the term ‘Industrial Internet of Things’).

Just one such application is remote robotics: HMIs, in tandem with the IoT, are the means by which human operators can exchange data with the very machinery that they control, and VR headsets are becoming an increasingly common way to achieve this. It is owing to the Internet of Things and human-machine interfaces that data can not only be obtained but put to outright use. With this in mind, human-machine interfaces allow the benefits of the above terms – the AIoT, the Cloud, and the Edge – to all be experienced in one human-readable headset.
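
As a closing sketch of that machine-to-human-to-machine loop, the snippet below pairs incoming robot telemetry with outgoing operator commands. The message fields, the simple queues standing in for a real transport (such as MQTT via an IoT gateway), and the toy control policy are all assumptions for illustration.

```python
import queue

# Invented stand-ins for a real IoT transport such as MQTT
telemetry_in: "queue.Queue[dict]" = queue.Queue()
commands_out: "queue.Queue[dict]" = queue.Queue()

def render_to_headset(telemetry: dict) -> None:
    """Placeholder for drawing the robot's state inside a VR headset."""
    print(f"Headset view: {telemetry}")

def operator_step(telemetry: dict) -> dict:
    """Toy policy: the 'human' nudges a robot arm towards a target angle."""
    error = 90.0 - telemetry["joint_angle_deg"]
    return {"joint_velocity": max(-5.0, min(5.0, error))}

# One cycle of the HMI loop: machine -> human -> machine
telemetry_in.put({"joint_angle_deg": 72.5})  # simulated robot report
state = telemetry_in.get()
render_to_headset(state)
commands_out.put(operator_step(state))
print(f"Command sent: {commands_out.get()}")
```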