
IoT on Google Cloud Platform

Google wants people to use its Cloud Platform to connect and manage IoT devices through IoT Core, and to use other GCP components like BigQuery to analyze the data those devices produce. While these products are fantastic, they also come with some real-world challenges.

IoT Core provides a managed service for connecting IoT devices. It speaks both HTTP and MQTT, and features one-click integration with Cloud Pub/Sub, which takes care of most of the infrastructure work. However, there are some limitations:

  1. You cannot use arbitrary MQTT topics to send and receive messages as you would on a custom MQTT bridge. IoT Core mandates specific topic formats for publishing telemetry and for receiving commands.
  2. IoT Core secures devices with public-private key cryptography. Each device must register its public key with IoT Core and then authenticate by presenting a JWT signed with the matching private key before it can start sending and receiving messages.
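As a concrete illustration of these two constraints, here is a minimal Python sketch of the strings an IoT Core device has to get exactly right: the long-form MQTT client ID, the fixed telemetry and command topics, and the claims of the connection JWT. The project, region, registry, and device names are hypothetical, and actual signing would use a library such as PyJWT with the device's private key; this only builds the pieces.

```python
from datetime import datetime, timedelta, timezone


def iot_core_client_id(project: str, region: str, registry: str, device: str) -> str:
    """IoT Core expects this exact long-form client ID on MQTT connect."""
    return (f"projects/{project}/locations/{region}"
            f"/registries/{registry}/devices/{device}")


def telemetry_topic(device: str) -> str:
    """Devices publish telemetry here; arbitrary topics are rejected."""
    return f"/devices/{device}/events"


def commands_topic(device: str) -> str:
    """Devices subscribe here (with the wildcard) to receive commands."""
    return f"/devices/{device}/commands/#"


def jwt_claims(project: str, ttl_minutes: int = 60) -> dict:
    """Claims for the connection JWT. The token must be signed with the
    device's private key (e.g. RS256 via PyJWT) and passed as the MQTT
    password; the username is ignored by the bridge."""
    now = datetime.now(timezone.utc)
    return {
        "iat": int(now.timestamp()),
        "exp": int((now + timedelta(minutes=ttl_minutes)).timestamp()),
        "aud": project,  # the audience is the GCP project ID
    }
```

Note the short token lifetime: because the JWT expires, devices have to mint and reconnect with a fresh token periodically, which is one more thing vendor firmware rarely does out of the box.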
While these may seem like reasonable restrictions, one has to keep in mind that many hardware vendors are still stuck in the '90s. Many of them use outdated languages and are reluctant to change any aspect of their device firmware to make it talk to IoT Core. Vendors also want to move to a subscription model in which they charge a monthly fee for access to an MQTT bridge they host and to which their devices send messages, which makes them even more reluctant to adapt to IoT Core. The only real option left is to build a component that bridges the gap between the vendor's MQTT bridge and IoT Core, which is a bad idea for many reasons. Google should publish a list of pre-approved vendors whose devices are compatible with IoT Core; that would go a long way toward increasing adoption of GCP.

The other problem one faces with GCP is building the UI. Most IoT customers want real-time graphs for their devices, and while Google does have Data Studio, it is not meant for real-time streaming data. To create a custom solution, one has to hook directly into Pub/Sub and plot the streaming data received from there. For historical graphs, Data Studio backed by BigQuery might be an option, but one has to be very wary of BigQuery costs, which can escalate quickly. Also, the only way to embed a Data Studio chart in a web page is via an iframe, which makes for a poor user experience. Google should provide a managed charting service (real-time and historical) that improves substantially on what Data Studio offers today.
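To make the "hook directly into Pub/Sub" approach concrete, here is a hedged Python sketch that turns one telemetry message into a point a live chart could append. The payload shape (`{"temp": ...}`) and the UI fan-out function are hypothetical; the commented wiring assumes the google-cloud-pubsub client library and valid credentials. IoT Core does stamp the originating device ID into the Pub/Sub message attributes as `deviceId`.

```python
import json
from datetime import datetime, timezone


def to_chart_point(message_data: bytes, attributes: dict) -> dict:
    """Convert one Pub/Sub message from IoT Core into a point a live
    chart can append. The payload is assumed to be the JSON the device
    published (a hypothetical {"temp": ...} reading)."""
    payload = json.loads(message_data.decode("utf-8"))
    return {
        "device": attributes.get("deviceId", "unknown"),
        "ts": datetime.now(timezone.utc).isoformat(),
        "value": payload["temp"],
    }


# Wiring this to a real subscription (requires google-cloud-pubsub and
# GCP credentials, so it is not runnable standalone) would look roughly like:
#
#   from google.cloud import pubsub_v1
#
#   subscriber = pubsub_v1.SubscriberClient()
#   sub_path = subscriber.subscription_path("my-project", "telemetry-sub")
#
#   def callback(message):
#       point = to_chart_point(message.data, dict(message.attributes))
#       push_to_websocket(point)  # hypothetical fan-out to the browser UI
#       message.ack()
#
#   subscriber.subscribe(sub_path, callback=callback).result()
```

From there, the browser side is "just" a websocket client appending points to a charting library, but you own every piece of that pipeline yourself, which is exactly the gap a managed service could fill.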

GCP is a fantastic platform for building IoT solutions. If Google listens to its customers and provides a fully managed, end-to-end solution for the most common scenarios, it will be a game changer and a real challenger to the Azure and AWS IoT offerings.

