Augmented reality, virtual reality, artificial intelligence: exactly what are they, and how do they interact with one another? Every moment of our waking lives, we use our five senses to learn about our world. In our daily reality, we see people and cars moving on the street, or hear a colleague talking with a client in the next cubicle. We can smell something burning, a peculiar fish odor, or our morning bacon cooking. Our senses can tell us a lot, but we may still be missing some very important information. If today’s innovators have their way, the virtual elements of augmented reality will soon fill in those sensory gaps for us.
A Second Intelligence
Your curiosity about this subject is a sign of your own intelligence, but computing machines offer us something different. Artificial intelligence (AI) uses the computing power of machines to perform tasks that are normally associated with intelligent beings. Those tasks include activities related to perception, learning, reasoning, and problem solving. AI can add to our personal experience through something called augmented reality (AR).
We should not confuse the two terms, although they are related. You might compare them to what we know as perception and reason in human beings. We perceive the world through our five senses, but we interpret those perceptions through our reasoning powers. Augmented reality uses devices like smart glasses and handheld devices to provide us with more data and add to our perceptions, but it is artificial intelligence that makes sense of all that information.
What is augmented reality without AI? It is like eyes without a brain. Tyler Lindell is an AI/AR/VR software engineer for Holographic Interfaces, as well as a software engineer at Tesla. In an article called “Augmented Reality Needs AI In Order To Be Effective,” he says that most people don’t realize that “AI and machine learning technologies sit at the heart of AR platforms”.
Another Set of Eyes and Ears
There are larger questions about the meaning of intelligence and the role of computers that can spark research and deep conversations. I have written about the history of artificial intelligence and whether machines can actually think. Recently I took another look at J.C.R. Licklider’s vision for man-computer symbiosis. But for those in the business world or in a production environment, you may just want to know what these technologies can do. An article from Lifewire tells us that augmented reality “enriches perception by adding virtual elements to the physical world”.
Just as our eyes and ears need the brain to interpret the sights and sounds presented to us, augmented reality depends on AI to provide pertinent information to the user in real time. Imagine taking a walk through the city. You see buildings and landmarks. If you looked through an AR device, it could give you more information, such as the name or address of a building, or some history about a landmark.
Four Categories of Augmented Reality
An online guide to augmented reality describes four different categories of AR. Marker-based AR (also called image recognition) determines information about an object using a visual marker, such as a QR/2D code. Markerless AR is location-based or position-based; GPS-equipped devices fit into this category. Projection-based AR projects artificial light onto real-world surfaces. And superimposition-based AR places a virtual object into a real space, such as IKEA’s software that lets you see how a couch might look in your living room.
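To make the markerless (location-based) category concrete, here is a minimal Python sketch of how an AR overlay might decide which landmark label to display, given the user's GPS position. The landmark names and coordinates are purely hypothetical; a real AR platform would query a mapping service rather than a hard-coded table.

```python
import math

# Hypothetical landmark database; names and coordinates are illustrative only.
LANDMARKS = {
    "City Hall": (40.7128, -74.0060),
    "Old Clock Tower": (40.7580, -73.9855),
    "Riverside Museum": (40.7812, -73.9665),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_landmark(lat, lon):
    """Return (name, distance_km) of the landmark closest to the user."""
    name = min(LANDMARKS, key=lambda n: haversine_km(lat, lon, *LANDMARKS[n]))
    return name, haversine_km(lat, lon, *LANDMARKS[name])

# A user standing near the first landmark: this is the label the AR overlay
# would annotate first.
name, dist = nearest_landmark(40.7130, -74.0050)
print(name, round(dist, 3))
```

The same lookup could drive any location-based overlay; the AI part of a real system lies in recognizing what the camera sees and ranking which annotations are relevant, not in this distance calculation.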
Augmented Reality devices in various stages of development include:
- sensors and cameras
- heads-up display (HUD)
- contact lenses
- virtual retinal display (VRD)
Technology in Transition
The potential of augmented reality backed by artificial intelligence is only now being realized in the marketplace. Tech evangelist Robert Scoble and his co-author Shel Israel believe that we are only in the beginning stages of a technological development that will have an enormous impact. In their 2016 book The Fourth Transformation: How Augmented Reality & Artificial Intelligence Will Change Everything, they say that we are on the cusp of a new stage. The four “transformations” in their theory can be summarized with these headings:
- Text and MS-DOS
- Graphical user interfaces
- Small devices
- Augmented reality
The technological revolution is already underway. Google’s experiment with smart glasses was an early entry into the consumer AR market. Now augmented reality is being introduced into a broad spectrum of industries, from construction to military. IKEA and other retailers have seen the value of augmenting the views of customers who may potentially place furniture into their homes. Architects and builders are using AR to visualize how new construction might fit into current settings. AR solutions are being developed for technicians in a variety of fields to get analytics in real time. Soldiers with AR visors will be able to get battlefield data as fighting occurs.
The Iron Man movies from Marvel give us an illustration of augmented reality. In his high-tech suit, the character Tony Stark sees constantly changing data that he would never have perceived on his own. An artificial intelligence in the suit searches its vast data sources and offers split-second assessments based on immediate events. Like Iron Man’s suit, AR devices in the coming years will be highly dependent on AI and its resources to aid us in our tasks.
Challenges in Augmented Reality
It takes a while for applied science to catch up with the imaginations of science fiction. Limitations such as physics prevent the speedy invention and implementation of the devices on our wish list. The flip mobile phone reminded some people of Captain Kirk’s communicator, but it took a lot of technology to get us there. Iron Man’s augmented reality faces many more challenges. A short cartoon posted by The Atlantic shows how augmented reality will change tech experiences.
The company Niantic offers a smartphone app that gives you information about the places you visit. “The application was designed to run in the background and just to pop up,” says the narrator.
The next Niantic project was Pokémon GO, an augmented reality game that went viral. The company’s CEO, John Hanke, says that “AR is the spiritual successor to the smartphone that we know and love today.” However clever our ideas, the obstacles can be overwhelming. What happens when Iron Man or Captain Kirk loses connectivity? How much bandwidth is required to transmit all that data, and what do we do when transmission channels become congested?
How can AI access the pertinent data quickly enough to be helpful when we need it? And how can we manage all that information?
There are many potential use cases for augmented reality that go beyond the scope of this article. In the hands of police, the military, or rescue personnel, AR devices could help catch criminals, win battles, or save lives. Devices embedded with image and speech recognition capabilities could become our eyes and ears. Repair technicians could use AR to find leaks or diagnose defective equipment. The wonders of augmented reality, along with artificial intelligence, will become much more apparent to us in the next few years.
Cyber-security has always been a major concern for providers, vendors and operators of IT systems and services. Despite increased investments in security technology, this has not changed, as is evident from several notorious cyber-attacks and related security incidents that have taken place over the last couple of years. It’s time to revolutionize cyber security with artificial intelligence.
For instance, earlier this year, the global “WannaCry” ransomware attack severely affected the operations of numerous organizations worldwide, including major ones such as Britain’s National Health Service (NHS). “WannaCry” demonstrated the potential scale and physical consequences of cyber-crime incidents, while confirming the importance of proper cyber security measures.
Beyond their financial and business implications, cyber-attacks have a significant socio-economic impact as well, as they reduce citizens’ and businesses’ trust in IT systems and services. This lack of trust is a major issue in an increasingly connected world, and in an era where IT systems are a primary vehicle for increased competitiveness and productivity. It’s therefore important to understand the factors that increase the number and scale of cyber security attacks, along with options for alleviating security incidents against IT infrastructures, such as phishing, botnets, ransomware and DDoS (Distributed Denial of Service) incidents.
Advanced Cyber-Security Systems
Effective cyber-protection requires modern, advanced and intelligent cyber-security systems. The scale, complexity and sophistication of these systems are driven by the following factors:
Technology Evolution: The evolving technological complexity of cyber infrastructures makes their protection more challenging. For example, the rise and expanded use of Internet-of-Things (IoT) technologies creates cyber-crime opportunities based on the hacking of individual devices. Such hacking was hardly possible before the advent of the IoT paradigm. This is evident in the emergence of large-scale IoT attacks, such as last year’s massive IoT-based Distributed Denial of Service (DDoS) attack that brought down Dyn’s Domain Name System (DNS) service and affected major internet sites like Twitter, Amazon and Spotify.
Complex Regulatory Environment: Nowadays, operators of IT infrastructures and IT service providers need to adhere to quite complex regulatory requirements, including sector-specific requirements (e.g., regulations for financial institutions) and general-purpose regulations such as the EU’s General Data Protection Regulation (GDPR). The implementation of security policies and controls that address these regulatory requirements contributes to the rising complexity of cyber-security systems.
Convergence of Physical and Cyber Security: IT systems are increasingly becoming connected to and interdependent with physical systems and processes. This is, for example, the case with most industrial organizations, which converge their cyber and physical infrastructures as part of their digital transformation in the Industry 4.0 era. Industry 4.0 infrastructures in sectors like energy, manufacturing and oil & gas form large-scale cyber-physical systems. This cyber-physical nature leads gradually to a convergence of physical-security and cyber-security measures and policies, toward greater effectiveness and economies of scale. Converged cyber and physical security measures are more appropriate for identifying and mitigating complex, asymmetric security incidents, which are likely to attack both cyber and physical systems at the same time. Overall, while this convergence is beneficial for industrial organizations, it increases the complexity of the respective security systems.
New Business Models and Opportunities: The increased reliance of products and services on cyber infrastructures provides new business opportunities for providers of cyber-security solutions and services. As a prominent example, a new wave of cyber-insurance services is currently designed to support the emerging connected cars and semi-autonomous driving paradigms.
These include, for example, insurance business models that use IT-derived information about the driver’s behavior to adapt car insurance fees. Supporting these opportunities implies additional security measures, concerning for example the secure and trustworthy transmission of the information that supports them.
Paradigm Shifts in Cyber Security
Confronting the recent wave of sophisticated cyber-attacks requires new approaches to threat identification, assessment and mitigation. Some of the main characteristics of these approaches include:
- Integrated and holistic nature: Instead of protecting specific devices and IT systems, there is a need for holistic, cross-cutting mechanisms that span all the different layers of modern cyber-security infrastructures, including individual devices, fog/edge computing nodes, and cloud infrastructures. The implementation of holistic, cross-cutting mechanisms must be driven by integrated approaches to threat modelling, which identify, assess and rate vulnerabilities and threats across all the layers of a cyber-infrastructure. Assessment and rating are key to prioritizing the deployment of specific security measures at the most appropriate places in the infrastructure. This is very important given that organizations operate on quite constrained IT security budgets, which makes it impossible to provide full protection against all possible vulnerabilities.
- Intelligence and dynamism: To cope with emerging complex, large-scale, dynamic and asymmetric attacks, there is a need for intelligent and dynamic mechanisms that can correlate information from multiple sources to identify security incidents and vulnerabilities in a timely manner. In practice, this requires the deployment of advanced techniques for threat identification and assessment, based on machine learning and data mining models that implement a data-driven approach to cyber security.
- Adherence to the latest security standards: Fortunately, security standards have been evolving in line with the rising sophistication of cyber-security attacks. This means that adhering to standards can be a safe path to designing and deploying systems that support the above-mentioned holistic approach to cyber security. Organizations are therefore implementing security standards ranging from the popular ISO/IEC 27001 standard on information security management to the Security Framework of the Industrial Internet Consortium for securing cyber infrastructures that support industrial processes.
- User-friendly and human-centric: Novel approaches to cyber security should consider the human factor, to alleviate the need for end users to understand security systems and processes. This is particularly important for organizations (such as small and medium-sized businesses) that lack the knowledge and capital needed to invest in security training for their personnel.
- New delivery models: Organizations are increasingly adopting new delivery models for security services, such as Managed Security Services and Security-as-a-Service. These models obviate the need for on-premises installations and enable enterprises to leverage security services in a flexible, pay-as-you-go fashion.
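The budget-constrained prioritization mentioned under the first point can be sketched in a few lines of Python. The vulnerability names, risk scores and mitigation costs below are invented for illustration; a real program would draw its ratings from a framework such as CVSS and would likely use more sophisticated optimization than this greedy heuristic.

```python
# Hypothetical vulnerability list: (name, risk_score, mitigation_cost).
# Scores and costs are illustrative, not taken from any real rating system.
vulns = [
    ("unpatched-edge-gateway", 9.1, 40),
    ("weak-cloud-iam-policy", 7.4, 25),
    ("legacy-device-firmware", 6.8, 60),
    ("phishing-exposure", 8.2, 20),
]

def prioritize(vulns, budget):
    """Greedy sketch: fund mitigations by risk-per-cost until the budget runs out."""
    plan, remaining = [], budget
    for name, risk, cost in sorted(vulns, key=lambda v: v[1] / v[2], reverse=True):
        if cost <= remaining:
            plan.append(name)
            remaining -= cost
    return plan

print(prioritize(vulns, budget=70))
# ['phishing-exposure', 'weak-cloud-iam-policy']
```

The point of the sketch is the decision structure, not the numbers: with a constrained budget, an explicit rating lets the highest-value mitigations be deployed first rather than attempting full coverage.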
The implementation of solutions with the above-listed characteristics signals a paradigm shift in the way security is designed, deployed and provided. This shift is destined to increase the cyber-resilience of organizations, including large enterprises and SMBs.
How to Revolutionize Cyber Security with Artificial Intelligence
In the quest for dynamic, intelligent and holistic cyber-security mechanisms, security experts are now considering the employment of AI-based mechanisms. This consideration is largely motivated by recent advances in deep learning and AI, which facilitate the identification of very complex patterns through human-like reasoning.
Relevant technology advances have empowered Google’s AlphaGo to defeat grandmasters at the game of Go, which is considered a milestone in the evolution of AI technologies. Likewise, AI techniques can be used to detect and assess complex attack patterns, as a means of preventing or alleviating large-scale security incidents such as “WannaCry”.
Deploying artificial intelligence for cyber security can provide some compelling advantages, including:
- Detecting complex attacks: Deep learning techniques based on advanced neural networks enable the detection of non-conventional, non-trivial security incidents that can hardly be detected using commonly applied rules and conventional reasoning.
- Predictive security analytics: AI is a perfect enabler for predictive security, employing predictive data analytics based on deep learning. This can enable a paradigm shift from reactive to predictive security, in which organizations anticipate the occurrence of threats in time to prepare and apply proper mitigation strategies.
- Security automation: AI systems can increase the automation of security measures by triggering mitigation actions automatically upon the detection of cyber-security threats. While human involvement remains necessary and desirable, increased security automation can deliver advanced protection functionalities at a lower cost.
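As a toy illustration of the data-driven detection idea, the following Python sketch flags statistical outliers in a stream of failed-login counts. A simple z-score test stands in for the deep learning models discussed above, and the numbers are fabricated for the example.

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Flag observations more than `threshold` population standard deviations
    from the mean. A deliberately simple stand-in for the learned detectors
    described in the text."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # avoid division by zero
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hourly failed-login counts; the spike at index 5 mimics a brute-force attempt.
logins = [3, 4, 2, 5, 3, 95, 4, 3]
print(flag_anomalies(logins))  # [5]
```

A production system would correlate many such signals across devices and network layers; the value of deep learning is precisely in catching attack patterns that no single-feature rule like this can express.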
Despite these benefits, AI security implementations are still in their early stages, because several challenges must be addressed before AI can be deployed effectively. For example, there is a need to collect and use large amounts of data, which are not always readily available. Therefore, AI systems are usually supported by the deployment of additional security monitoring probes at the device, fog, edge and cloud layers of the cyber-security infrastructure.
Likewise, effectively deploying AI for cyber security requires domain knowledge, to avoid failures of the deep learning networks such as overfitting on the training data.
Such domain knowledge requires the collaboration of security experts, data scientists and experts in field processes, which is not always easy to achieve. Finally, there is also a need for aligning the operation of AI-based security systems with the business objectives and security policies of the organization, which can be extremely challenging.
In order to alleviate these challenges, enterprises need to consistently collect and manage security datasets, while at the same time assembling a security team with proper skills including both data science and security expertise.
Moreover, they need to leverage emerging AI-based tools, such as TensorFlow and H2O.ai, to extract knowledge from those datasets.
Finally, it’s good to adopt an incremental deployment approach, which boosts the acquisition of knowledge and experience in the AI field while gradually meeting business objectives. As enterprises face unprecedented security challenges, new approaches are required. AI will certainly be among the most useful tools in organizations’ cyber-resilience arsenal. Despite the early challenges, the best of AI-driven cyber security is still to come.
It’s been more than a decade since the number of internet-connected devices exceeded the number of people on the planet. This milestone signaled the emergence and rise of the Internet of Things (IoT) paradigm, which empowered a whole new range of applications that leverage data and services from billions of connected devices, or smart objects. Nowadays IoT applications are disrupting entire sectors in both consumer and industrial settings, including manufacturing, energy, healthcare, transport, public infrastructure and smart cities.
Evolution of IoT Deployments
During the past decade, IoT applications have evolved in size, scale and sophistication. Early IoT deployments involved tens or hundreds of sensors, wireless sensor networks and RFID (Radio Frequency Identification) systems in small- to medium-scale installations within an organization. Moreover, they were mostly focused on data collection and processing, with quite limited intelligence. Typical examples include early building management systems that used sensors to optimize resource usage, as well as traceability applications in RFID-enabled supply chains.
Over the years, these deployments have given way to scalable and more dynamic IoT systems involving many thousands of IoT devices of different types, known as smart objects. One of the main characteristics of state-of-the-art systems is their integration with cloud computing infrastructures, which allows IoT applications to take advantage of the capacity and quality of service of the cloud. Furthermore, state-of-the-art systems tend to be more intelligent, as they can automatically identify and learn the status of their surrounding environment and adapt their behavior accordingly. For example, modern smart building applications are able to automatically learn and anticipate resource usage patterns, which makes them more efficient than conventional building management systems.
Overall, we can distinguish the following two phases of IoT development:
- Phase 1 (2005-2010) – Monolithic IoT systems: This phase entailed the development and deployment of systems with limited scalability, which used some sort of IoT middleware (e.g., TinyOS, MQTT) to coordinate some tens or hundreds of sensors and IoT devices.
- Phase 2 (2011-2016) – Cloud-based IoT systems: This period is characterized by the integration and convergence of IoT and cloud computing, which enabled the delivery of IoT applications based on utility models such as Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). During this phase, major IT vendors such as Amazon, Microsoft and IBM established their own IoT platforms and ecosystems on top of their existing cloud computing infrastructures. These platforms alleviated the scalability limitations of earlier IoT deployments, which provided opportunities for cost-effective deployments. At the same time, the wave of Big Data technologies opened new horizons in the ability of IoT applications to implement data-driven intelligence.
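The middleware coordination that runs through both phases can be illustrated with a minimal publish/subscribe sketch in Python. This in-memory stand-in mimics the topic-based messaging pattern of protocols like MQTT; a real deployment would use a networked broker with quality-of-service guarantees, retained messages and security, all of which this toy deliberately omits.

```python
from collections import defaultdict

class TinyBroker:
    """Minimal in-memory topic broker, sketching the publish/subscribe
    pattern that IoT middleware such as MQTT provides."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for every message on `topic`."""
        self._subs[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver a payload to every subscriber of `topic`."""
        for cb in self._subs[topic]:
            cb(topic, payload)

# A hypothetical building-management scenario: an application subscribes to a
# temperature topic, and a sensor publishes a reading.
broker = TinyBroker()
readings = []
broker.subscribe("building/floor1/temp", lambda topic, payload: readings.append(payload))
broker.publish("building/floor1/temp", 21.5)
print(readings)  # [21.5]
```

Decoupling sensors (publishers) from applications (subscribers) through topics is what allowed early deployments to coordinate hundreds of devices, and the same pattern scales to cloud-hosted brokers in Phase 2 systems.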
AI: The Dawn of Smart Objects in IoT Applications
Despite their scalability and intelligence, most IoT deployments tend to be passive, with only limited interactions with the physical world. This is a serious setback to realizing the multi-trillion-dollar value potential of IoT in the next decade, as a great deal of IoT’s business value is expected to stem from real-time actuation and control functionalities that intelligently change the status of the physical world.
To enable these functionalities, we are now witnessing the rise and proliferation of IoT applications that take advantage of artificial intelligence and smart objects. Smart objects are characterized by their ability to execute application logic in a semi-autonomous fashion, decoupled from the centralized cloud.
In this way, they are able to reason about their surrounding environment and take optimal decisions that are not necessarily subject to central control. Smart objects can therefore act without being permanently connected to the cloud. However, they can conveniently connect to the cloud when needed, in order to exchange information with other, possibly passive, objects, including information about their own state and the status of the surrounding environment.
Prominent examples of smart objects follow:
- Socially assistive robots, which provide coaching or assistance to special user groups, such as elderly people with motor problems and children with disabilities.
- Industrial robots, which complete laborious tasks (e.g., picking and packing) in warehouses, manufacturing shop floors and energy plants.
- Smart machines, which predict and anticipate their own failure modes while autonomously scheduling relevant maintenance and repair actions (e.g., ordering spare parts, scheduling technician visits).
- Connected vehicles, which collect and exchange information about their driving context with other vehicles, pedestrians and the road infrastructure, as a means of optimizing routes and increasing safety.
- Self-driving cars, which will drive autonomously with superior efficiency and safety, without any human intervention.
- Smart pumps, which operate autonomously in order to identify and prevent leakages in the water management infrastructure.
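As a toy example of the smart-machine idea above, the following Python sketch extrapolates a machine's sensor trend to estimate when it will cross a failure threshold. The vibration readings and the threshold are invented for illustration; production systems would use learned failure models rather than a straight-line fit.

```python
def hours_until_threshold(readings, threshold):
    """Fit a least-squares line to equally spaced (hourly) sensor readings and
    extrapolate when the signal will cross `threshold`. Returns None when
    there is no upward trend. A simple stand-in for learned failure models."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # signal is flat or falling: no failure predicted
    intercept = mean_y - slope * mean_x
    # Crossing time relative to the last observed reading.
    return (threshold - intercept) / slope - (n - 1)

# Hypothetical hourly vibration levels drifting toward a failure threshold of 10.
vibration = [1.0, 2.0, 3.0, 4.0, 5.0]
print(hours_until_threshold(vibration, threshold=10.0))  # 5.0
```

With an estimate like this in hand, a smart machine can order spare parts or book a technician visit before the failure occurs, which is the essence of the predictive maintenance scenario described above.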
The integration of smart objects within conventional IoT/cloud systems signals a new era for IoT applications, which will be endowed with a host of functionalities that are hardly possible today. AI is one of the main drivers of this new IoT deployment paradigm, as it provides the means for understanding and reasoning about the context of smart objects. While AI functionalities have been around for decades in various forms (e.g., expert systems and fuzzy logic systems), earlier AI systems were not suitable for supporting smart objects that must act autonomously in open, dynamic environments such as industrial plants and transportation infrastructures.
This is bound to change because of recent advances in AI based on deep learning, which employs advanced neural networks and provides human-like reasoning capabilities. Over the last couple of years we have witnessed the first tangible demonstrations of such AI capabilities applied to real-life problems. For example, last year Google’s AlphaGo engine managed to defeat a Chinese grandmaster at the game of Go. This signaled a major milestone in AI, as human-like reasoning was used instead of an exhaustive analysis of possible moves, which was the norm in earlier AI systems in similar settings (e.g., IBM’s Deep Blue computer that beat chess world champion Garry Kasparov back in 1997).
Implications of AI and IoT Convergence for Smart Objects
This convergence of IoT and AI signals a paradigm shift in the way IoT applications are developed, deployed and operated. The main implications of this convergence are:
- Changes in IoT architectures: Smart objects operate autonomously and are not subject to the control of a centralized cloud. This requires revisions to conventional cloud architectures, which should become able to connect to smart objects in an ad hoc fashion in order to exchange state and knowledge about their status and the status of the physical environment.
- Expanded use of Edge Computing: Edge computing is already deployed as a means of enabling operations very close to the field, such as fast data processing and real-time control. Smart objects are also likely to connect to the very edge of an IoT deployment, which will lead to an expanded use of the edge computing paradigm.
- Killer Applications: AI will enable a whole range of new IoT applications, including some “killer” applications like autonomous driving and predictive maintenance of machines. It will also revolutionize and disrupt existing IoT applications. As a prominent example, the introduction of smart appliances (e.g., washing machines that maintain themselves and order their detergent) in residential environments holds the promise to disrupt the smart home market.
- Security and Privacy Challenges: Smart objects increase the volatility, dynamism and complexity of IoT environments, which will lead to new cyber-security challenges. Furthermore, they will enable new ways for compromising citizens’ privacy. Therefore, new ideas for safeguarding security and privacy in this emerging landscape will be needed.
- New Standards and Regulations: A new regulatory environment will be needed, given that smart objects might be able to change the status of the physical environment leading to potential damage, losses and liabilities that do not exist nowadays. Likewise, new standards in areas such as safety, security and interoperability will be required.
- Market Opportunities: AI and smart objects will offer unprecedented opportunities for new innovative applications and revenue streams. These will not be limited to giant vendors and service providers, but will extend to innovators and SMBs (Small Medium Businesses).
AI is the cornerstone of next generation IoT applications, which will exhibit autonomous behavior and will be subject to decentralized control. These applications will be driven by advances in deep learning and neural networks, which will endow IoT systems with capabilities far beyond conventional data mining and IoT analytics. These trends will be propelled by several other technological advances, including Cyber-Physical Systems (CPS) and blockchain technologies. CPS systems represent a major class of smart objects, which will be increasingly used in industrial environments.
They are the foundation of the fourth industrial revolution through bridging physical processes with digital systems that control and manage industrial processes. Currently CPS systems feature limited intelligence, which is to be enhanced based on the advent and evolution of deep learning. On the other hand, blockchain technology (inspired by the popular Bitcoin cryptocurrency) can provide the means for managing interactions between smart objects, IoT platforms and other IT systems at scale. Blockchains can enable the establishment, auditing and execution of smart contracts between objects and IoT platforms, as a means of controlling the semi-autonomous behavior of the smart object.
This will be a preferred approach to managing smart objects, given that they may belong to different administrative entities and should be able to interact directly and at scale, without needing to authenticate themselves against a trusted entity such as a centralized cloud platform.
In terms of possible applications the sky is the limit. AI will enable innovative IoT applications that will boost automation and productivity, while eliminating error prone processes. Are you getting ready for the era of AI in IoT?
For years we have been hearing that driverless cars will soon be dominating our highways as autonomous vehicles are developed. Not since the advent of the horseless carriage have we been faced with such a disruption in personal travel.
But the adoption of vehicle automation will likely take longer than many have thought. It is more likely to increase gradually, over several years and through several stages, as further development in artificial intelligence makes vehicles more autonomous.
Science policy writer Jeffrey Mervis writes for Science online about six levels of auto autonomy. Level zero describes the cars that our fathers and grandfathers drove. There was no automation in those old Fords and Chevys, and early automobiles didn’t have automatic transmissions or power steering. Level five would apply to a vehicle in which everything is automated and there are no manual controls — even if a driver wanted to take over.
The industry target is level four, a scenario in which drivers can take over control of automated vehicles under certain conditions (such as inclement weather). According to a table in the article, the cars we currently drive fit into level one, those in testing now are level two, while the limited automation of level three might never be deployed. Here’s a summary of the levels of automation according to Mervis:
- Level Zero: no automation
- Level One: driver controlled with adaptive cruise control and parking assistance
- Level Two: partial automation accelerates, brakes, or steers and connects to other vehicles (IoT)
- Level Three: conditional automation assumes near full control within limited parameters
- Level Four: everything automated under certain conditions
- Level Five: everything automated under all conditions
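The six levels above can be captured as a simple lookup, which is roughly how application code might gate features by automation level. This is an illustrative sketch of the levels as summarized by Mervis, not an official SAE data structure.

```python
# Summary of the six automation levels described above (illustrative only).
AUTOMATION_LEVELS = {
    0: "no automation",
    1: "driver assistance (adaptive cruise control, parking assistance)",
    2: "partial automation (accelerates, brakes, or steers; vehicle connectivity)",
    3: "conditional automation (near-full control within limited parameters)",
    4: "high automation (everything automated under certain conditions)",
    5: "full automation (everything automated under all conditions)",
}

def describe(level):
    """Return a human-readable description for an automation level."""
    return AUTOMATION_LEVELS.get(level, "unknown level")

print(describe(4))  # the current industry target, per Mervis
```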
The point is that there is not necessarily a binary choice between cars with drivers and driverless cars. It is a matter of degree. New cars already include assisted-driving capabilities, such as parking, braking, and object avoidance. We are making progress, but an article in the MIT Technology Review claims that “Driverless Cars Are Further Away Than You Think”.
Contrasting Visions of Autonomous Vehicles
The role that automated vehicles will take remains a matter of debate. We have all heard about a possible future that includes fleets of driverless trucks or taxis. Some people look forward to relaxing, sleeping, reading, or other activities in an autonomous vehicle that automatically delivers us to our destination. An article in the Atlantic predicts a time when car rides across town might be free — so long as the rider makes a stop at one of the taxi’s sponsors.
Some even see the advent of driverless cars as a solution that leads us to a new utopia. Benefits would include fewer accidents, with reduced casualties and medical cost savings. Automated vehicles that communicate with each other would make commutes easier by eliminating traffic jams. The young, disabled, or elderly would be able to “drive” across town, giving them more independence. Police would be freed up to work on other issues when DUIs and speeding offenses become a thing of the past. And the bureaucracy surrounding the DMV and driver’s licenses could be eliminated.
But not everyone agrees with this rosy view. Vehicle automation might not be so friendly to the environment. And there might be a divide between the haves and the have-nots if only the rich can afford automated cars. Of course, automated vehicles will be subject to the same issues that affect computers, such as viruses and cybercrime. And then there is a matter that no one seems to be discussing: Some people just love to drive! Being stuck with an automated car can take away the sense of control and independence that we feel out on the open road. As any NASCAR or Indy race fan will tell you, they aren’t giving up the thrill of control and acceleration to autonomous vehicles.
Other Automated Vehicles
When someone mentions automated vehicles, the first thing we think of is the cars we drive every day. But those are not the only ones. I remember visiting the Docklands area of London more than a decade ago.
It seemed funny that the driver on the Docklands Light Railway was sitting back and reading the newspaper. That setup is probably more common than we realize. You’ve probably also heard that airline pilots do very little these days, while the huge jets that they allegedly fly are automatically controlled by onboard computers.
Automation could be implemented in any of the transportation systems we use, and industrial systems seem to use as much automation as possible. Assembly lines in manufacturing plants are automated, and robotic vehicles often defy classification. Automation seems to be everywhere.
While writing this article I met an Arkansas farmer who told me his automation story. He said that the tractors that work the soil now do it all without the benefit of a driver. The old farmer told me that young fellows in central control rooms run the automated farm vehicles now. Some of the tractors don’t even have steering wheels — which means that the old farmer couldn’t drive them even if he wanted to.
Who’s In Control with Autonomous Vehicles?
When we drive down the highway in our private vehicles, we know who is in control. We know who is pressing the accelerator, who is turning the wheel, who is applying the brake. We know who is monitoring the gauges and adjusting the radio volume. We are. But autonomous vehicles of the future will take advantage of another form of intelligence to make decisions. Artificial intelligence (AI), according to Encyclopedia Britannica, is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” So-called driverless cars will actually be driven by artificial intelligence.
We already have many computers in our cars. These are called electronic control units (ECUs), and they manage vehicle systems such as engine, transmission, brakes, traction, and climate control. I wrote about it for Techopedia in an article called “Your Car, Your Computer: ECUs and the Controller Area Network”. AI will take this computerized control much further. Writer Colin Pawsey says that AI could be the driving force in autonomous vehicle development. “Autonomous technology is set to transform the motor industry, but there are no clear paths for manufacturers,” he writes. “As the autonomous mobility industry takes shape, artificial intelligence could play a much bigger role.”
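Those ECUs talk to each other over the Controller Area Network (CAN) bus mentioned above, exchanging small fixed-format frames. As a rough illustration, here is a sketch in Python that parses a classic CAN frame laid out in the 16-byte style used by Linux's SocketCAN interface (4-byte arbitration ID, 1-byte data length, 3 padding bytes, up to 8 data bytes). The message ID and the RPM encoding in the example are hypothetical; real signal layouts vary by manufacturer.

```python
import struct


def parse_can_frame(frame: bytes):
    """Split a 16-byte SocketCAN-style frame into (arbitration_id, data).

    Assumes a little-endian host; the first 4 bytes are the arbitration
    ID, byte 4 is the data length code (DLC), bytes 8.. hold the payload.
    """
    can_id, dlc = struct.unpack_from("<IB", frame)
    data = frame[8:8 + dlc]
    return can_id, data


# Hypothetical example: an engine ECU broadcasting RPM as a 16-bit
# big-endian value in the first two data bytes of message 0x0C0.
frame = struct.pack("<IB3x", 0x0C0, 2) + bytes([0x1A, 0x2C]) + bytes(6)
can_id, data = parse_can_frame(frame)
rpm = int.from_bytes(data, "big")  # 0x1A2C = 6700 RPM
```

The takeaway is that today's in-car "intelligence" is mostly this kind of fixed, rule-based message passing between ECUs; the AI that Pawsey describes would sit a layer above it, deciding what those messages should say.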
There is no stopping the technological advances that shape our society. Market forces will continue to shape how science is applied, and the success of any technology is directly related to how widely it is adopted in industry or public use. It’s like test driving a new car. If you like it you will buy it, and if you really like it you’ll be sure to let all your friends know. The next decade should tell us a lot about how well people like autonomous vehicles and the artificial intelligence that drives them.