Categories
Articles Artificial Intelligence

Visual AI Advances NLP & VLP, Learn How

AI research is becoming more advanced and much of that research revolves around the use of Artificial Intelligence (AI) in Natural Language Processing (NLP) and Visual Language Processing (VLP).

Both NLP and VLP can benefit from recent advances in visual AI technology, such as the ability to ingest images alongside many other feature types in a dataset, which opens new avenues for innovation through diverse data types.

Various AI researchers have been asking whether these two technologies can be combined, and what the impact would be on AI research and future advancements.

This article will explain how visual AI technology is progressing and what benefits it provides to companies and consumers while also describing how NLP can benefit from the advancement of visual AI.

How Visual AI Advances NLP and VLP

When visual AI is combined with NLP or VLP, the results are impressive. Images that inform your models create a more in-depth understanding of your data. This increased understanding helps determine outcomes much more accurately and supports tasks such as image classification, where the goal is to understand what an image contains.

Visual AI lets datasets combine virtually any feature types you can imagine, including those used in NLP and VLP. This enables companies to get a more rounded perspective of their data without relying solely on one type of data, such as text or images. For example, a business trying to find out who its customers are can use images along with any other feature type in its dataset.

This helps create more accurate customer profiles, uncovers new features for more targeted outreach, and makes it easier to automate business processes using AI.

Visual AI is being used by both NLP and VLP researchers for their own purposes; examples include Facebook’s DeepText and Google’s Smart Reply.

Visual AI: The Future of Language Understanding

Visual language processing is a branch of artificial intelligence focused on creating new algorithms that allow computers to more accurately understand images and their contents.

While NLP and VLP are already part of everyday life, with NLP powering search engines like Google and voice assistants like Siri, and VLP used by autonomous vehicles such as drones, the advances made in visual AI open up new possibilities. For example, imagine a drone that can identify and avoid hazards in its path.

The advancement of visual AI is also providing new possibilities for NLP and VLP, with progress in machine learning and computer vision helping create more advanced language understanding tools, as well as algorithms that allow for fluid, autonomous conversational systems like Facebook’s DeepText.

This technology could play a huge role in future advancements by helping us move forward with new tools and capabilities that help further the ability of AI in areas like customer service, healthcare, or other fields.

The Power of Visual Language Processing

Imagine a world where you can create an entirely new operating system just by describing it out loud. That’s what happened to Google Research programmer Jacob Schreiber when he was working with Google’s Smart Reply technology, using it to build a basic version of Chrome OS with just his voice. And that’s just one example of the possibilities that visual language processing can bring in the future.

Visual language processing provides a host of advantages over text alone, such as allowing for an increase in understanding accuracy through contextual information and providing additional examples to base machine learning on. For example, instead of relying only on text or images to create a self-driving car, customers can provide data about what they do in a day along with images or videos of the routes they take.
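As a toy illustration of this kind of multimodal fusion, the sketch below concatenates text-derived and image-derived feature vectors and classifies the result with a nearest-centroid rule. All names and numbers here are invented for illustration, not drawn from any real system:

```python
# Sketch: fusing text and image features into one vector for a simple
# nearest-centroid classifier. Feature vectors are toy stand-ins for the
# output of real text and image encoders.
import math

def fuse(text_features, image_features):
    """Concatenate the two modality vectors into a single feature vector."""
    return list(text_features) + list(image_features)

def nearest_centroid(sample, centroids):
    """Return the label of the centroid closest to the fused sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Hypothetical centroids learned from fused text+image vectors.
centroids = {
    "urban_route": [0.9, 0.1, 0.8, 0.2],
    "rural_route": [0.1, 0.9, 0.2, 0.8],
}
sample = fuse([0.85, 0.15], [0.75, 0.25])
print(nearest_centroid(sample, centroids))  # urban_route
```

The point is only the shape of the pipeline: extract features per modality, fuse them, then classify the fused vector.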

Visual language processing also provides tools and techniques that make developing these types of systems more accessible even to non-programmers thanks to advancements in natural language understanding (NLU). This allows you to create conversational interfaces for your programs without needing to have a background in programming.

How is Visual AI Being Used?

The biggest area where visual language processing has been used so far is customer service, where it can monitor and analyze social media posts for sentiment analysis or automatically follow up on text messages without the need for human intervention.

There are specific industries that have already adopted visual AI to great effect, including gaming, healthcare, and customer service.

Gaming

One of the most popular examples of how these types of systems are being used is found in video games. Microsoft’s research into using visuals combined with NLP for understanding player dialogue has given them a huge leg up in the gaming industry.

Their new game State of Decay 2 provides an example of how this technology can be used, with players able to talk with any character in the game as if they were talking to a real person thanks to Microsoft’s AI advancements.

Healthcare

Visual language processing has also been developed into software that is able to do analysis on medical images like MRIs for doctors, saving them from having to spend a lot of time looking through and analyzing thousands of images.

Customer Service

One of the biggest areas where visual language processing has been used so far is customer support operations. Examples include Taskbob, which provides customer support with little or no human input by automatically following up on customers’ text messages, and Aira, which gives blind users information about their surroundings through images taken of nearby objects.
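The automated-triage idea described above can be sketched with a minimal rule-based sentiment scorer. A production system would use a trained model, but the routing logic has the same shape; the word lists and routing labels below are illustrative assumptions:

```python
# Minimal rule-based sentiment scorer for support messages, plus a triage
# rule that escalates negative messages to a human agent.
POSITIVE = {"great", "thanks", "love", "resolved", "helpful"}
NEGATIVE = {"broken", "angry", "refund", "worst", "unusable"}

def sentiment(message):
    """Score a message by counting lexicon hits."""
    words = message.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def route(message):
    # Negative messages escalate to a human; the rest get an automated reply.
    return "escalate_to_agent" if sentiment(message) == "negative" else "auto_reply"

print(route("my device is broken and i want a refund"))  # escalate_to_agent
```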

What’s Next?

The advancement of visual language processing will continue to benefit NLP and VLP, as well as helping develop automated systems that can help us with customer service, healthcare, and much more.

As technology continues to improve and become more accessible, it will make developing these types of applications easier, even for people without a background in programming or machine learning. The days when NLP was limited to text alone are almost gone, thanks to advancements in visual language processing that now allow us to have a better understanding of the world around us. To learn more about visual AI and how it is impacting the industry, reach out and speak with Martin at NextGen today.

Categories
Articles Artificial Intelligence Cyber Security

Healthcare Cyber Attacks to Medical Devices, EMR Apps, and Cloud

Embracing next technology healthcare without adequate preparation will only open new risk avenues and threat vectors for healthcare cyber attacks.  Technology is perceived as a solution to address operational inefficiencies within the healthcare industry and to expand the reach of high quality healthcare services to remote regions. But the risks are mounting.

Vulnerable Devices for Critical Medical Practices

The proliferation of smart technologies will encompass the healthcare industry in the coming years. Digital devices such as smart pacemakers and insulin pumps are widely used today, and the next generation of smart technologies will cover a variety of critical cardiovascular, respiratory, and neurological medical practices. However, next technology healthcare devices aren’t immune to sophisticated attacks. In the hands of malicious actors, vulnerable smart medical devices can deliver a killer blow to patients instead of maintaining stable health.

Cloud Vulnerabilities for Healthcare Cyber Attacks

Cloud connectivity is critical to access patient information anywhere-anytime, a promise that’s driving transition to the cloud for healthcare institutions. PHI data is effectively stored in off-site data centers beyond the control of healthcare providers originally in charge of maintaining patient data privacy and security. Any vulnerability in their cloud networks is an open invitation for hackers to compromise sensitive patient information.

IoT Networking

Unlike cloud vendors subject to stringent compliance regulations, patients themselves are unable to secure IoT-connected medical devices at home. A malware-infected dialysis machine could be part of a DDoS attack intended to bring down the entire network infrastructure of a hospital. Since IoT devices come from multiple vendors, through different processes, and offer different technologies, it is not entirely possible to maintain a consistent standard and control around healthcare cyber attacks and IoT device security.
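One simple defensive signal against this kind of attack is watching for devices whose traffic spikes far above their own baseline. The sketch below is a hypothetical illustration of that idea, not a description of any real hospital monitoring product; device names and thresholds are invented:

```python
# Sketch: flag IoT devices whose request rate spikes far above their own
# baseline -- one simple signal for spotting a malware-infected device
# participating in a DDoS attack.
from collections import defaultdict

class RateMonitor:
    def __init__(self, threshold_multiplier=10):
        self.baseline = {}             # device_id -> expected requests/minute
        self.counts = defaultdict(int) # device_id -> observed requests this minute
        self.multiplier = threshold_multiplier

    def set_baseline(self, device_id, requests_per_minute):
        self.baseline[device_id] = requests_per_minute

    def record(self, device_id):
        self.counts[device_id] += 1

    def flagged(self):
        """Devices whose observed rate exceeds baseline * multiplier."""
        return [d for d, n in self.counts.items()
                if n > self.baseline.get(d, 1) * self.multiplier]

monitor = RateMonitor()
monitor.set_baseline("dialysis-01", 2)   # normally ~2 requests/minute
for _ in range(500):                     # sudden flood of traffic
    monitor.record("dialysis-01")
print(monitor.flagged())  # ['dialysis-01']
```

A real deployment would learn baselines per device automatically, but the core comparison is the same.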

Next Technology Healthcare Cyber Attacks to Mobile Apps

Healthcare providers adopting telemedicine practices using smartphone health apps may not realize or control the personally identifiable information shared with third-party advertisers. These apps run on mobile platforms vulnerable to security threats, especially when the OS is not updated to apply the latest available security patches.

Considering the general lack of security awareness among patients, who use outdated mobile app and OS versions and fall prey to mundane social engineering ploys, the industry has a long way to go before mobile apps can be considered secure channels with effective safeguards against healthcare cyber attacks.

Do you think the next technology healthcare industry is ready to take a deep dive into cyber security adoption without adequate preparation and fixing loopholes that exist within the technology itself?

Recruiting expertise in medical devices and electronic health records

Need an executive search consultant with deep knowledge and contacts in the medical field?  NextGen has identified and recruited key personnel ranging from principal / chief engineers in software development, systems design, and embedded wireless to directors and VPs in sales, business development, and technology to president of business unit for medical device manufacturers, electronic health records developers, clinical integration, and bio medical research and development.

Categories
Articles Artificial Intelligence

Ai in Nanotechnology for Biomedical Usage

Nanotechnology has been slowly treading into the field of biomedicine for almost a decade now. Because nanotechnology for biomedical usage is still a relatively new technology surrounded by many ethical debates, its footsteps are slow and careful. So what is nanomedicine? As the name suggests, it is the application of nanotechnology to medicine, and that is where Ai (artificial intelligence) comes to light.

You can put about a thousand nano-particles side by side across the cross-section of a single hair, then disseminate them into the bloodstream to move with the same fluidity as a red blood cell. Many biomedical scientists and researchers have managed to apply nanotechnology productively. In 2016, a DNA nanorobot was created for targeted drug delivery in cancerous cells. The National Center for Nanoscience and Technology (NCNST) in Beijing, China recently created a bactericidal nanoparticle that carried an antibiotic and successfully suppressed a bacterial infection in mice.

However, the most remarkable innovation in this field was in 2017, when biomedical engineers designed and created small-scale locomotive robots mimicking the structure, mobility, and durability of red-blood cells. These nanobots developed by AI architects exhibit the ability to swim, climb, roll, walk, jump over and crawl in between the liquid or solid terrains inside the human body. Scientists expect that with the creation of these nanobots, they will be able to freely circulate around the body, diagnose malfunctions, deliver drugs to the disease, and report back by lighting up while performing their drug delivery.

As amazing as that may sound, many find it equally invasive; hence the ethical debates surrounding nanomedicine. Taking a completely neutral stance, we will try to give readers a brief overview of what Ai in nanotechnology for biomedical usage is all about, what strides it has made, and where it stands currently.


NanoTechnology for Biomedical Usage Methods


Owing to characteristics such as their minute size, nano-particles have found effective uses in the medicinal field. Some of these Ai in nanotechnology for biomedical usage methods include the following:

  1. Targeted drug delivery and consequentially minimal side-effects of treatments.
  2. Tissue regeneration and replacement, for example, implanting coatings, regenerating tissue scaffolds, repairing bones via structural implantation
  3. Implanting diagnostic and assessment devices, nano-imaging, nano-pores, artificial binding sites, quantum dots etc.
  4. Implanting aid like retina or cochlear implants
  5. Non-invasive surgical nano-bots

The first of these, targeted drug delivery, involves nano-particles that are constructed of immune-system-friendly materials, loaded with drugs, and sent to the targeted areas of the body. Owing to their small size, they can effectively target only the areas that are disease-ridden: dysfunctional parts of cells as opposed to entire cells or whole organs.

This essentially means minimal side-effects because it lowers healthy cell damage. This can be demonstrated by the example of NCNST creating nano-robots that carried a blood-coagulating enzyme called Thrombin.

These thrombin-carrying nano-particles were then sent to tumor cells, essentially cutting off the tumor’s blood supply. Another example of drug delivery using nanoparticles comes from CytImmune, a leading diagnostics company that used nanotechnology for precision-based delivery of chemotherapy drugs; it has published the results of its first clinical trial, while a second is underway. Many such methods of drug delivery are being used for cancer, heart disease, mental illness, and even aging.

Regenerative Ai in NanoTechnology for Biomedical Usage

As per the National Institutes of Health, regenerative medicine involves “creating live, practicable tissues to repair or replace tissues or organ functions lost because of a slew of reasons, which may be chronic disease, increasing age or congenital defects.”

Just as nano-bots mimic the structure of red blood cells, they can mimic the function of auto-immune cells and antibodies in order to aid the natural healing process. Because natural cellular interaction takes place at the micro-scale, nanotechnology can make itself useful in multiple ways. These include regeneration of bone, skin, teeth, eye tissue, nerve cells, and cartilage. Ai is able to collect data on, direct, and modify these regenerative processes.

You can read about Ai in nanotechnology for biomedical-usage-based cell repair in the following article: The Ideal Gene Delivery Vector: Chromallocytes, Cell Repair Nanorobots for Chromosome Repair Therapy.

While such a powerful and innovative technology has innumerable advantages in the medical field, it must be used within certain ethical parameters for long-term applicability. Nanotechnology brings with it many risks that researchers need to keep in mind. If you need help identifying and recruiting senior executives or functional leaders in artificial intelligence technology, consider the experienced team at NextGen Global Executive Search.

Categories
Articles Artificial Intelligence

Augmented Reality Virtual Elements to Physical World

Augmented reality virtual elements, virtual reality, artificial intelligence: exactly what are they, and how do they interact with one another? Every moment of our waking lives, we use our five senses to learn about our world. In our daily reality, we see people and cars moving on the street, or hear a colleague talking with a client in the next cubicle. We can smell something burning, a peculiar fish smell, or our morning bacon cooking. Our senses can tell us a lot — but we may still be missing some very important information. If today’s innovators have their way, augmented reality virtual elements will soon fill in those sensory gaps for us.

 

Four Categories of Augmented Reality Virtual Elements

 

An online guide to augmented reality describes four different categories of AR. Marker-based AR (also called Image Recognition) can determine information about an object using something called a QR/2D code. It uses a visual marker. Markerless AR is location-based or position-based. GPS devices might fit into this category. Projection-based AR projects artificial light onto real world surfaces. And superimposition-based AR puts a virtual object into a real space, such as IKEA’s software that lets you see how a couch might look in your living room.

Augmented Reality devices in various stages of development include:

  • sensors and cameras
  • projectors
  • eyeglasses
  • heads-up display (HUD)
  • contact lenses
  • virtual retinal display (VRD)
  • handheld

A Second Intelligence

Your curiosity about this subject is a sign of your own intelligence, but computing machines offer us something different. Artificial intelligence (AI) uses the computing power of machines to perform tasks that are normally associated with intelligent beings. Those tasks include activities related to perception, learning, reasoning, and problem solving. AI can add to our personal experience through something called augmented reality (AR).

We should not confuse the two terms, although they are related. You might compare them to what we know as perception and reason in human beings. We perceive the world through our five senses, but we interpret those perceptions through our reasoning powers. Augmented reality uses devices like smart glasses and handheld devices to provide us with more data and add to our perceptions, but it is artificial intelligence that makes sense of all that information.

What are augmented reality virtual elements without AI? They are like eyes without a brain. Tyler Lindell is an AI/AR/VR software engineer for Holographic Interfaces, as well as a software engineer at Tesla. In an article called “Augmented Reality Needs AI In Order To Be Effective”, he says that most people don’t realize that “AI and machine learning technologies sit at the heart of AR platforms”.

Another Set of Eyes and Ears

There are some larger questions about the meaning of intelligence and the role of computers that are always good to trigger research and deep conversations. I have written about the history of artificial intelligence and whether machines can actually think. Recently I took another look at J.C.R. Licklider’s vision for man-computer symbiosis. But for those in the business world or in a production environment, you may just want to know what these technologies can do. An article from Lifewire tells us that augmented reality “enriches perception by adding virtual elements to the physical world”.

Just as our eyes and ears need the brain to interpret the sights and sounds presented to us, augmented reality virtual elements depend on AI to provide pertinent information to the user in real time. Imagine taking a walk through the city. You see buildings and landmarks. If you looked through an AR device, it could give you more information, such as the name or address of a building, or some history about a landmark.
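The "AI behind the overlay" step described here can be sketched as a nearest-landmark lookup from a GPS position. The landmark names and coordinates below are invented for illustration:

```python
# Sketch: given the user's GPS position, find the nearest known landmark
# for the AR device to annotate. Coordinates and names are made up.
import math

LANDMARKS = {
    "Old Mill Building": (40.7128, -74.0060),
    "Harbor Lighthouse": (40.7306, -73.9866),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_landmark(position):
    """Pick the landmark closest to the user's current position."""
    return min(LANDMARKS, key=lambda name: haversine_km(position, LANDMARKS[name]))

print(nearest_landmark((40.7130, -74.0055)))  # Old Mill Building
```

A real AR system would also use the camera image and compass heading to decide what the user is looking at; this sketch shows only the position-based lookup.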

Technology in Transition

The potential of augmented reality virtual elements backed by artificial intelligence is only now being realized in the marketplace. Tech evangelist Robert Scoble and his co-author Shel Israel believe that we are only in the beginning stages of technological development that will have an enormous impact.  In their 2016 book The Fourth Transformation: How Augmented Reality & Artificial Intelligence Will Change Everything, they say that we are on the cusp of a new stage. The four “transformations” in their theory can be summarized with these headings:

  • Text and MS-DOS
  • Graphical user interfaces
  • Small devices
  • Augmented reality

The technological revolution is already underway. Google’s experiment with smart glasses was an early entry into the consumer AR market. Now augmented reality is being introduced into a broad spectrum of industries, from construction to military. IKEA and other retailers have seen the value of augmenting the views of customers who may potentially place furniture into their homes. Architects and builders are using AR to visualize how new construction might fit into current settings. AR solutions are being developed for technicians in a variety of fields to get analytics in real time. Soldiers with AR visors will be able to get battlefield data as fighting occurs.

The Ironman movies from Marvel give us an illustration of augmented reality. In his high-tech suit, the character Tony Stark sees constantly changing data that he would never have perceived on his own. An artificial intelligence in the suit searches its vast data sources and offers split-second assessments based on immediate events. Like Ironman, AR devices in the coming years will be highly dependent on AI and its resources to aid us in our tasks.

Challenges in Augmented Reality Virtual Elements

 

It takes a while for applied science to catch up with the imaginations of science fiction. Limitations such as physics prevent the speedy invention and implementation of the devices on our wish list. The flip mobile phone reminded some people of Captain Kirk’s communicator, but it took a lot of technology to get us there. Ironman’s augmented reality poses far more challenges. A short cartoon posted by The Atlantic shows how augmented reality will change tech experiences.

The company Niantic offers a smartphone app that gives you information about the places you visit. “The application was designed to run in the background and just to pop up,” says the narrator.

The next Niantic project was Pokémon GO, an augmented reality game that went viral. The company’s CEO, John Hanke, says that “AR is the spiritual successor to the smartphone that we know and love today.” However clever our ideas, the obstacles can be overwhelming. What happens when Ironman or Captain Kirk loses connectivity? How much bandwidth is required to transmit all that data, and what do we do when transmission channels become congested?

How can AI access the pertinent data quickly enough to be helpful when we need it? And how can we manage all that information?

 

Conclusion

 

There are so many potential use cases for augmented reality that go beyond the scope of this article. In the hands of police, the military, or rescue personnel, AR devices could help catch criminals, win battles, or save lives. Devices embedded with image and speech recognition capabilities could become our eyes and ears. Repairmen could use AR to find leaks or diagnose defective equipment. The wonders of augmented reality virtual elements, along with artificial intelligence, will become much more apparent to us in the next few years.


Categories
Aerospace - Aviation Articles Artificial Intelligence

Ai Impacts Aerospace Power Management Systems

Ai impacts aerospace power management in the way it can collect data and make decisions on conversion, generation, and distribution. In our modern technological society, controlling the flow of electricity is necessary for powering buildings, maintaining efficient computer systems, and providing energy to vehicle accessories. And it is critical to operating systems on airplanes and spacecraft. Engineers are turning to efficient design to conserve and control power, looking to how Ai impacts aerospace power management systems for smart solutions.

Perhaps the best example of this quest for improved aerospace technology through Ai is being done at Carnegie Mellon University. In 2015, The Boeing Company joined with the university to establish the Boeing/Carnegie Mellon Aerospace Data Analytics Lab. Boeing’s CIO called it “a unique aerospace partnership”.  And the company sank $7.5 million into the project.

 

Boeing Studies Ai Impacts Aerospace Power Management

 

“The goal is to find ways to use artificial intelligence and big data to capitalize on the enormous amount of data generated in the design, construction and operation of modern aircraft,” according to a Carnegie Mellon news release. The author Byron Spice writes that aircraft are constantly generating data.  He calls aeronautics “one of the most data-intensive industries”.

In coverage of this partnership, Wired Magazine proclaimed:  “And now, Ai invades the skies.” James Carbonell, project leader and a computer scientist at the university, sees great promise in this endeavor.  “We’re working to develop algorithms that can process all that, understand it, and create a unified way of analyzing information,” he said.

The implementation of Ai impacts aerospace power management systems on airplanes and space ships extends to all areas and subsystems. Just as car makers have entrusted much of the decision-making to onboard computers, the aerospace industry is installing smart technology into air and space vehicles. In fact, the European Space Agency (ESA) is developing space applications for the same Controller Area Network (CAN) technology being used in automobiles. In the ESA paper “Artificial Intelligence for Space Applications”, the authors identify the subsystems of a spacecraft, all of which may be guided by Ai:

Where Ai Impacts Aerospace Power Management

 

  • attitude determination and control
  • telemetry tracking and command
  • command and data handling
  • power
  • thermal structures and mechanisms
  • guidance and navigation

Load Shedding and Ai

 

So, what can artificial intelligence do to improve the power subsystem, both in planes and spacecraft? Perhaps the most important task in learning how Ai impacts aerospace power management is making sure you’ve got enough power to get home safely. And to do that, sometimes you must turn off everything except the most critical systems. In aviation — as well as in the electric power industry — that process is called “load shedding”.

That’s how NASA brought the Apollo 13 crew home. Smart people used intelligent methods to limit power consumption in the spacecraft and direct energy to where it was most needed. Many people credit the contracting firm Kepner-Tregoe and their problem analysis method for saving the astronauts. And who hasn’t seen the movie “Apollo 13”, directed by Ron Howard?

In dramatic fashion, astronauts in a mock lunar module simulated actions required to control the use of onboard power.  What if a computer system could make all those calculations and decisions for you? That’s the principle behind AI-based load shedding. One aviation blog defines load shedding as “reducing demands on the aircraft’s electrical system when part of that system fails”.

The author gives us three principles that apply to the process (which he believes will also work in load shedding our personal workload):

  • Know when to load-shed
  • Know what to load-shed
  • Know how to load-shed
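Those three principles can be sketched as a simple priority-based algorithm: decide when shedding is needed (demand exceeds the budget), what to shed (the lowest-priority loads), and how (drop loads until demand fits). The load names and wattages below are illustrative, not real aircraft figures:

```python
# Sketch of priority-based load shedding: given a reduced power budget,
# keep the highest-priority loads and shed the rest.
def shed_loads(loads, available_watts):
    """loads: list of (name, watts, priority), lower priority number = more critical.
    Returns (kept, shed) lists of load names."""
    kept, shed, used = [], [], 0
    for name, watts, _ in sorted(loads, key=lambda l: l[2]):
        if used + watts <= available_watts:
            kept.append(name)   # fits in the budget, keep it powered
            used += watts
        else:
            shed.append(name)   # over budget, shed this load
    return kept, shed

loads = [
    ("flight_controls", 400, 1),
    ("navigation",      200, 2),
    ("cabin_lighting",  300, 5),
    ("galley_ovens",    600, 6),
]
kept, shed = shed_loads(loads, available_watts=700)
print(kept)  # ['flight_controls', 'navigation']
print(shed)  # ['cabin_lighting', 'galley_ovens']
```

An AI-based system would additionally predict how long the remaining power must last, but the core decision loop looks like this.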

The journal Air Facts says that using AI in the cockpit is nothing new. “In fact,” writes author John Zimmerman, “many pilots have been flying with very primitive forms of Ai for years, even if they didn’t realize it: autopilots, FADEC, and load-shedding electrical systems all use computer power to make intelligent decisions.”

Smart Controllers / Smart Software

 

Making aircraft and spacecraft smarter requires advancements in both hardware and software. Just as innovations in drones and unmanned vehicles are making strides, innovations for manned and unmanned aircraft continue to show promise.

A power controller from Data Device Corp offers smart system management. A company spokesman says, “DDC’s new high-power density SSPC offers a reliable and efficient solution, optimized for aircraft mission systems that can benefit from the functionality provided by smart aerospace power management.”

Space News writer Debra Warner tells how NASA is putting artificial intelligence into everything. In the article ”Beyond HAL: How artificial intelligence is changing space systems”, she quotes NASA scientist Kelly Fong:  “Work we are doing today focuses not so much on general intelligence but on trying to allow systems to be more independent, more self-reliant, more autonomous.”

Current Ai-driven aerospace power management systems may not be as smart as the HAL 9000 unit in the movie 2001: A Space Odyssey. But the smart software being developed and used today is capable of predictive analytics that could help prevent future disasters like those experienced in the Apollo and Challenger space programs.

Conclusion

Of course, AI impacts aerospace power management systems in other ways besides load shedding. Just as the electric smart grid keeps the lights on, intelligent power systems on planes and space ships can keep pilots, astronauts, and passengers moving toward the completion of their journey. Whether it’s improved power distribution, error control, load shedding, or guarding against disaster, artificial intelligence shows great promise for continued advancement in aerospace system control. It seems that we are just getting started.

Categories
Articles Artificial Intelligence Internet of Things Wireless Ecosystems

Smart Objects: Blending Ai into the Internet of Things

It’s been more than a decade since the number of internet-connected devices exceeded the number of people on the planet. This milestone signaled the rise of the Internet of Things (IoT) paradigm and its smart objects, which empowered a whole new range of applications that leverage data and services from billions of connected devices. Nowadays IoT applications are disrupting entire sectors in both consumer and industrial settings, including manufacturing, energy, healthcare, transport, public infrastructure, and smart cities.

Evolution of IoT Deployments

 

During this past decade IoT applications have evolved in size, scale, and sophistication. Early IoT deployments involved tens or hundreds of sensors, wireless sensor networks, and RFID (Radio Frequency Identification) systems in small to medium scale installations within an organization. Moreover, they were mostly focused on data collection and processing, with quite limited intelligence. Typical examples include early building management systems that used sensors to optimize resource usage, as well as traceability applications in RFID-enabled supply chains.

Over the years, these deployments have given way to scalable and more dynamic IoT systems involving many thousands of IoT devices of different types, known as smart objects. One of the main characteristics of state-of-the-art systems is their integration with cloud computing infrastructures, which allows IoT applications to take advantage of the capacity and quality of service of the cloud. Furthermore, state-of-the-art systems tend to be more intelligent, as they can automatically identify and learn the status of their surrounding environment and adapt their behavior accordingly. For example, modern smart building applications are able to automatically learn and anticipate resource usage patterns, which makes them more efficient than conventional building management systems.
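A minimal sketch of that "learn and anticipate usage" behavior, assuming an exponential moving average over hourly occupancy readings; all values and thresholds are invented for illustration:

```python
# Sketch: a smart-building controller "learning" hourly occupancy with an
# exponential moving average, then pre-heating only when demand is expected.
class UsageLearner:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.expected = {}  # hour -> smoothed occupancy estimate

    def observe(self, hour, occupancy):
        # Blend the new reading into the running estimate for that hour.
        prev = self.expected.get(hour, occupancy)
        self.expected[hour] = (1 - self.alpha) * prev + self.alpha * occupancy

    def should_preheat(self, hour, threshold=10):
        # Act only when the learned estimate predicts enough occupants.
        return self.expected.get(hour, 0) >= threshold

learner = UsageLearner()
for day in range(5):        # a week of readings: busy at 9:00, empty at 22:00
    learner.observe(9, 40)
    learner.observe(22, 0)
print(learner.should_preheat(9))   # True
print(learner.should_preheat(22))  # False
```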

Overall, we can distinguish the following two phases of IoT development:

  • Phase 1 (2005-2010) – Monolithic IoT systems: This phase entailed the development and deployment of systems with limited scalability, which used early IoT middleware and protocols (e.g., TinyOS, MQTT) to coordinate tens or hundreds of sensors and IoT devices.
  • Phase 2 (2011-2016) – Cloud-based IoT systems: This period was characterized by the integration and convergence of IoT and cloud computing, which enabled the delivery of IoT applications through utility-based models such as Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). During this phase, major IT vendors such as Amazon, Microsoft and IBM established their own IoT platforms and ecosystems on top of their existing cloud computing infrastructures. These platforms alleviated the scalability limitations of earlier IoT deployments, which opened opportunities for cost-effective deployments. At the same time, the wave of Big Data technologies opened new horizons for data-driven intelligence in IoT applications.
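The Phase 1 middleware mentioned above was typically built around topic-based publish/subscribe messaging, the pattern that MQTT popularized. The sketch below is a minimal in-memory illustration of that pattern only; it is not a real MQTT broker or client, and the topic names and payloads are invented for the example.

```python
# Minimal in-memory sketch of topic-based publish/subscribe, the
# messaging pattern popularized by IoT middleware such as MQTT.
# Not a real broker or client library.
from collections import defaultdict

class TinyBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every callback registered on this topic.
        for callback in self._subscribers[topic]:
            callback(topic, payload)

# Example: a gateway aggregating readings from a temperature sensor.
broker = TinyBroker()
readings = []
broker.subscribe("building/floor1/temperature",
                 lambda topic, payload: readings.append(payload))
broker.publish("building/floor1/temperature", 21.5)
broker.publish("building/floor1/temperature", 22.1)
broker.publish("building/floor2/temperature", 5.0)  # no subscriber: ignored
print(readings)  # [21.5, 22.1]
```

Topic-based decoupling is what let those early deployments coordinate tens or hundreds of sensors without each device knowing about the others.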

 

AI: The Dawn of Smart Objects in IoT Applications

 

 

Despite their scalability and intelligence, most IoT deployments tend to be passive, with only limited interactions with the physical world. This is a serious setback to realizing the multi-trillion-dollar value potential of IoT in the next decade, as a great deal of IoT’s business value is expected to stem from real-time actuation and control functionalities that will intelligently change the status of the physical world.

In order to enable these functionalities, we have recently witnessed the rise and proliferation of IoT applications that take advantage of Artificial Intelligence and smart objects. Smart objects are characterized by their ability to execute application logic in a semi-autonomous fashion, decoupled from the centralized cloud.

In this way, they are able to reason over their surrounding environment and make optimal decisions that are not necessarily subject to central control. Smart objects can therefore act without being permanently connected to the cloud. However, they can conveniently connect to the cloud when needed, in order to exchange information with other objects, including their own state and the status of the surrounding environment.
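This decoupled behavior can be sketched as a device that applies local decision logic immediately and buffers state updates for the cloud until a connection is available. Everything in the sketch (the valve, the pressure threshold, the method names) is a hypothetical illustration, not a real device API.

```python
# Hypothetical sketch of a semi-autonomous smart object: it reasons
# locally over sensor readings and only syncs its state with the
# cloud opportunistically, when connectivity happens to be available.

class SmartValve:
    PRESSURE_LIMIT = 8.0  # illustrative threshold (bar)

    def __init__(self):
        self.open = True
        self._pending = []  # state updates buffered while offline

    def on_reading(self, pressure):
        # Local, autonomous decision: no cloud round-trip required.
        if pressure > self.PRESSURE_LIMIT and self.open:
            self.open = False
            self._pending.append(("valve_closed", pressure))

    def sync(self, cloud):
        # Called opportunistically when a connection is available.
        for event in self._pending:
            cloud.append(event)
        self._pending.clear()

valve = SmartValve()
cloud_log = []
valve.on_reading(6.2)   # normal reading: no action
valve.on_reading(9.4)   # over limit: the valve closes itself immediately
valve.sync(cloud_log)   # later, the decision is reported to the cloud
print(valve.open, cloud_log)  # False [('valve_closed', 9.4)]
```

The key property is the ordering: the actuation happens at reading time, while the cloud only learns about it after the fact.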

Prominent examples of smart objects follow:

  • Socially assistive robots, which provide coaching or assistance to special user groups such as elderly people with motor impairments and children with disabilities.
  • Industrial robots, which complete laborious tasks (e.g., picking and packing) in warehouses, manufacturing shop floors and energy plants.
  • Smart machines, which predict and anticipate their own failure modes, while autonomously scheduling relevant maintenance and repair actions (e.g., ordering spare parts, scheduling technician visits).
  • Connected vehicles, which collect and exchange information about their driving context with other vehicles, pedestrians and the road infrastructure, as a means of optimizing routes and increasing safety.
  • Self-driving cars, which will drive autonomously with superior efficiency and safety, without any human intervention.
  • Smart pumps, which operate autonomously in order to identify and prevent leakages in the water management infrastructure.

The integration of smart objects within conventional IoT/cloud systems signals a new era for IoT applications, which will be endowed with a host of functionalities that are hardly possible nowadays. AI is one of the main drivers of this new IoT deployment paradigm, as it provides the means for understanding and reasoning over the context of smart objects. While AI functionalities have been around for decades in various forms (e.g., expert systems and fuzzy logic systems), earlier AI systems were not suitable for supporting smart objects that must act autonomously in open and dynamic environments such as industrial plants and transportation infrastructures.

This is bound to change because of recent advances in AI based on deep learning, which employs advanced neural networks and provides human-like reasoning functionalities. During the last couple of years we have witnessed the first tangible demonstrations of such AI capabilities applied to real-life problems. For example, Google’s AlphaGo engine managed to defeat a Chinese Go grandmaster. This signaled a major milestone in AI, as human-like reasoning was used instead of an exhaustive analysis of all possible moves, which had been the norm in earlier AI systems in similar settings (e.g., IBM’s Deep Blue computer that beat chess world champion Garry Kasparov back in 1997).

Implications of AI and IoT Convergence for Smart Objects

 

This convergence of IoT and AI signals a paradigm shift in the way IoT applications are developed, deployed and operated. The main implications of this convergence are:

  • Changes in IoT architectures: Smart objects operate autonomously and are not subject to the control of a centralized cloud. This requires revisions to conventional cloud architectures, which should become able to connect to smart objects in an ad hoc fashion in order to exchange knowledge about their state and the status of the physical environment.
  • Expanded use of Edge Computing: Edge computing is already deployed as a means of enabling operations very close to the field, such as fast data processing and real-time control. Smart objects are also likely to connect to the very edge of an IoT deployment, which will lead to an expanded use of the edge computing paradigm.
  • Killer Applications: AI will enable a whole range of new IoT applications, including some “killer” applications like autonomous driving and predictive maintenance of machines. It will also revolutionize and disrupt existing IoT applications. As a prominent example, the introduction of smart appliances (e.g., washing machines that maintain themselves and order their detergent) in residential environments holds the promise to disrupt the smart home market.
  • Security and Privacy Challenges: Smart objects increase the volatility, dynamism and complexity of IoT environments, which will lead to new cyber-security challenges. Furthermore, they will enable new ways for compromising citizens’ privacy. Therefore, new ideas for safeguarding security and privacy in this emerging landscape will be needed.
  • New Standards and Regulations: A new regulatory environment will be needed, given that smart objects might be able to change the status of the physical environment leading to potential damage, losses and liabilities that do not exist nowadays. Likewise, new standards in areas such as safety, security and interoperability will be required.
  • Market Opportunities: AI and smart objects will offer unprecedented opportunities for innovative applications and new revenue streams. These will not be limited to giant vendors and service providers, but will extend to innovators and SMBs (small and medium-sized businesses).
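The edge computing implication above, fast local processing so that only meaningful events travel upstream, can be illustrated with a simple filter that keeps raw readings at the edge and forwards only anomalies to the cloud. The window size and z-score threshold below are illustrative choices, not recommendations from the article.

```python
# Sketch of edge-side filtering: process raw sensor readings locally
# and forward only anomalies upstream, cutting cloud traffic.
# Window size and z-score threshold are illustrative assumptions.
from statistics import mean, stdev

def edge_filter(readings, window=5, threshold=3.0):
    """Return only readings that deviate strongly from the recent
    local average (a simple z-score test over a sliding window)."""
    anomalies = []
    for i, value in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
    return anomalies

# Steady temperature readings with one spike: only the spike is sent.
stream = [20.0, 20.1, 19.9, 20.0, 20.2, 35.0, 20.1, 20.0]
print(edge_filter(stream))  # [(5, 35.0)]
```

The cloud still sees everything it needs for alerting, while the bulk of the raw stream never leaves the edge.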

Future Outlook

 

AI is the cornerstone of next-generation IoT applications, which will exhibit autonomous behavior and will be subject to decentralized control. These applications will be driven by advances in deep learning and neural networks, which will endow IoT systems with capabilities far beyond conventional data mining and IoT analytics. These trends will be propelled by several other technological advances, including Cyber-Physical Systems (CPS) and blockchain technologies. CPS systems represent a major class of smart objects, which will be increasingly used in industrial environments.

They are the foundation of the fourth industrial revolution, bridging physical processes with the digital systems that control and manage industrial processes. Currently, CPS systems feature limited intelligence, which is expected to be enhanced through the advent and evolution of deep learning. On the other hand, blockchain technology (inspired by the popular Bitcoin cryptocurrency) can provide the means for managing interactions between smart objects, IoT platforms and other IT systems at scale. Blockchains can enable the establishment, auditing and execution of smart contracts between objects and IoT platforms, as a means of controlling the semi-autonomous behavior of smart objects.

This is likely to become a preferred approach to managing smart objects, given that the latter belong to different administrative entities and should be able to interact directly and at scale, without needing to authenticate themselves against a trusted entity such as a centralized cloud platform.
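One way to picture the auditing role described above is a hash-chained log of object-to-platform interactions, where each entry commits to the previous one, so the record cannot be silently altered. This is a toy sketch of the chaining idea only: no consensus, no distribution, no signatures, and the log entries (including the timestamp) are invented for illustration.

```python
# Toy hash chain illustrating how a blockchain-style ledger can make
# interactions between smart objects and platforms tamper-evident.
# Not a real blockchain: no consensus, no distribution, no signatures.
import hashlib, json

def add_entry(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any tampered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_entry(ledger, "pump-7 requested a maintenance slot")
add_entry(ledger, "platform granted slot 2025-01-01T10:00")
print(verify(ledger))           # True
ledger[0]["record"] = "forged"  # tamper with history...
print(verify(ledger))           # ...and verification fails: False
```

The tamper-evidence comes purely from the chaining: changing any past entry invalidates every hash after it, which is why such a ledger can serve as a shared audit trail between parties that do not trust each other.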

In terms of possible applications, the sky is the limit. AI will enable innovative IoT applications that boost automation and productivity, while eliminating error-prone processes. Are you getting ready for the era of AI in IoT?

 

Charles Moore
