Chapter 4

Technologies

Today the share of computing, digital storage, and networking and connectivity technology in vehicles grows steadily. What makes vehicles particularly attractive for this growth are the opportunities these technologies provide, ranging from in-vehicle infotainment, navigation and telematics, and vehicle and engine control to parking and driving assistance. More and more vehicle innovations build on hardware- and software-controlled computing platforms and on networking and connectivity. Electric mobility and autonomous, highly automated and fully networked driving are currently the prevailing topics of many developments. These vehicles are equipped with high-performance data processing platforms for sensors, computing, storage, actuators and networking and connectivity (Figure 4.1).

Figure 4.1: Data processing in autonomous and automated vehicles

Autonomous and automated vehicles rely on a multitude of sensors, from ultrasound and video cameras to radar and LIDAR. These sensors provide massive amounts of data about the environment around the vehicle, which are fused: a video camera, for example, delivers valuable data in good daylight, while radar or LIDAR senses at night or provides depth data. Depth data are indispensable to distinguish, for instance, pedestrians from trees or traffic signs. The result is terabytes of data per vehicle which need to be processed or up- and downloaded every day. The entire V2X system must work cooperatively and interconnected, and each node requires the capability to process, store and transfer significant amounts of data. The emphasis is on data download as well as upload, since gathered data enables the vehicles, infrastructure components and the cloud to adapt and learn from previous experiences and environments.

Various sensor technologies such as radar, LIDAR, ultrasonic sound and video camera systems analyse the surrounding vehicle environment. These sensors produce several terabytes of data, which need to be stored and processed by high-performance computing platforms. The vehicle must be linked internally with other units and externally with other vehicles and with the infrastructure of various vehicle ecosystem stakeholders residing at data centers in the cloud. Many computing and communications technologies are instrumental in the digitalization of the vehicle ecosystem in particular for autonomous and automated driving.

The computing hardware architectures for SAE level 4 and 5 vehicles perform the pre-processing and fusion of the gathered sensor data, the learning and decision making, the HD map updates, the networking and connectivity and the vehicle actuation control. Several video cameras, radars, ultrasonic sound sensors and LIDARs deliver gigabits per second, and sensor fusion becomes the norm, requiring fully programmable, flexible, scalable and power-efficient computing platforms together with smart data storage systems for data logging and recording, maps and infotainment. Artificial intelligence and machine learning become a vehicle option, running algorithms for object detection, path identification and planning which likewise require high-performance computing platforms. The vehicle implementing V2X networking and connectivity becomes an enabler for mobility as a service, with storage, communications and networking capabilities, data and signal processing ability and sensing. Vehicle perception and localization rely on the aggregation of various compute- and communications-intensive functions into one powerful computing platform.

The automated and autonomous driving vehicle software runs on top of hardware platforms with defined operation domains like perception, localization, vehicle behavior and control, networking and connectivity, infotainment and equivalent controllers. Software in vehicles expands dramatically, into the ballpark of several hundred million lines of code. The increasing system complexity of automated and autonomous driving vehicles creates an increasing need for frequent, seamless and quick software updates impacting all vehicle ecosystem stakeholders. The trend from hardware- to software-defined systems challenges the way software gets updated, which needs to be adapted to vehicle ecosystem needs. Firmware over-the-air and software over-the-air (FOTA, SOTA) updates become a necessity. Additional challenges are software safety and security, the highly dynamic environment and efficient cloud-based vehicle software management.

A vehicle of any SAE level comprises computing units for infotainment, with telematics and navigation, vehicle control and connectivity. Today's vehicle navigation, telematics and infotainment are evolving from standard navigation maps with TMC traffic data, GNSS positioning and annual map updates together with ADAS systems toward automated and autonomous driving with highly reliable ADAS, HD maps including live data overlays and very accurate positioning and localization. These self-learning HD live maps are updated in real time and are cartographed by all automated and autonomous vehicles equipped with corresponding connectivity to the data centers in the cloud.

Networking and connectivity platforms get integrated with vehicle control platforms and many vehicle functions get connectivity for infotainment, telematics, navigation and vehicle control. Vehicle ecosystem cyber security and privacy are the major prerequisite to prevent attacks on functional safety and preclude hacks on ADAS features such as adaptive cruise control (ACC), pre-crash systems and automatic parking. Functional safety is not only about the vehicles; it includes traffic infrastructure, navigation and telematics and infotainment as well.

4.1 Sensing

The average number of sensors in today's vehicles is over 100 and growing significantly. Sensors and actuators are implemented for active and passive safety, convenience, infotainment, low emissions, energy efficiency and cost and weight reduction. Sensors and actuators are networked and connected with electronic control units (ECU) using several different in-vehicle networks. Vehicular sensors measure position, pressure, torque, exhaust temperature, angular rate, engine oil quality, flexible fuel composition, long-range distance, short-range distance, ambient gas concentrations, linear acceleration, exhaust oxygen, comfort/convenience factors, night vision, speed/timing, mass air flow and occupant safety/security. Sensors and actuators are found in the powertrain and chassis control (engine, automatic transmission, hybrid control, steering, braking, suspension), body electronics (instrument panel, key, doors, windows, lighting, air bag, seat belts), infotainment (audio, video, speech, navigation, traffic message channel (TMC), electronic toll collection (ETC)) and other assistance systems like electronic stability control, pre-crash safety, park or lane assist.

There are currently in-vehicle sensors for temperature, pressure, power, flow, fill level, distance or angle. Temperature sensors are used for oil, cooling water, exhaust gas and the inside or outside of the vehicle. Pressure sensors are for oil, cooling water, tire pressure or hydraulic oil. Power sensors are implemented for airbags, belt tensioners and pre-tensioners, window lifters, brakes and others. Flow sensors are used for oil. Fill level sensors are deployed for gas, cooling water or hydraulic oil. Angle sensors are applied for ABS, ESP, steering, pedals, damper compression or chassis inclination. More specific sensors are knock sensors, acceleration and vibration, engine speed, velocity, lambda probe, linear plate, brightness, humidity, torque, magnetic field and air mass.

All vehicle sensors have rigorous real-time and safety requirements. For example, an anti-lock braking system (ABS) measures the vehicle speed and the rotational speed of the vehicle's wheels to detect skid. When skid is detected, the pressure on the brake is released to stop the skid, but a permanent pressure reduction in the event of a fault has to be avoided. Another instance is an airbag control (AC) system, which monitors various vehicle sensors including accelerometers to detect a collision. If a collision is detected, the ignition of a gas generator propellant is triggered to inflate a bag. The trigger for the ignition must occur within 10 to 20 milliseconds after the collision. Here the functional safety requirements are even tougher than for ABS.
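The skid detection described above can be sketched via the longitudinal wheel slip ratio, comparing vehicle speed against wheel speed. This is a minimal illustration; the slip threshold and speeds are hypothetical example values, not taken from a real ECU.

```python
# Illustrative ABS skid-detection sketch using the wheel slip ratio.
# Threshold and speeds are hypothetical example values.

def slip_ratio(vehicle_speed_ms: float, wheel_speed_ms: float) -> float:
    """Longitudinal slip: 0.0 = free rolling, 1.0 = fully locked wheel."""
    if vehicle_speed_ms <= 0.0:
        return 0.0
    return (vehicle_speed_ms - wheel_speed_ms) / vehicle_speed_ms

def abs_should_release(vehicle_speed_ms: float, wheel_speed_ms: float,
                       slip_threshold: float = 0.2) -> bool:
    """Release brake pressure when slip exceeds the threshold."""
    return slip_ratio(vehicle_speed_ms, wheel_speed_ms) > slip_threshold

# Vehicle at 25 m/s, wheel circumference speed only 15 m/s -> 40 % slip
print(abs_should_release(25.0, 15.0))  # True: release pressure to stop the skid
print(abs_should_release(25.0, 24.0))  # False: 4 % slip is acceptable
```

A real ABS controller runs this decision in a tight real-time loop per wheel and must also handle the fault case noted above, so that a sensor failure never causes a permanent loss of brake pressure.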

To get to autonomous or automated vehicles that match human driving capabilities, it all starts with the perception of the vehicle's surrounding environment by gathering huge amounts of data in real time with different sensors. It is essential for these vehicles to know their exact position, to estimate precisely where to go safely next and to control exactly how to get there. A further essential data set contributes to vehicle environment awareness: data coming from the cloud infrastructure and from other vehicles. All this vehicle surround sensing must be highly robust and reliable in all use cases. Since sensors have different characteristics and work differently (see Table 4.1), there is no single sensor technology which fits all use cases.

Table 4.1: Set up for sensing

Vehicle environment sensor implementations currently include ultrasonic sound sensors, near-, mid- and long-range video cameras, and mid- and long-range radar or LIDAR. A sensor package for an SAE level 3 vehicle includes up to 12 ultrasonic sound sensors, up to 5 video cameras, up to 3 radar sensors and at least one LIDAR sensor (Figure 4.2). Tesla, for example, works with 4 to 8 surround video cameras to provide 360 degrees of visibility around the vehicle at up to 250 meters of range. Twelve ultrasonic sound sensors complement the vision sensors, allowing for detection of both hard and soft objects at up to 2 meters distance. A forward-facing radar provides additional data about the environment on a redundant wavelength that is able to see through heavy rain, fog, dust and the vehicle ahead.

Figure 4.2: Vehicle sensors supporting ADAS use cases

An ultrasonic sound sensor sends out sound waves. When the sound waves hit an object, they produce echoes revealing the obstacle's location. The sensor works for soft and hard objects within a horizontal angle of up to 120 degrees at distances up to 5 meters. Vehicles implement ultrasonic sound sensors to detect obstacles like pedestrians, vehicles or posts in the immediate vicinity of the vehicle. For larger distances and more details, multiple video camera sensors deliver images of the vehicle's surroundings. Rear view and near-range video cameras deliver a detection range of up to 15 meters with a horizontal aperture of around 130 degrees. A night vision video camera sensor looks up to 150 meters ahead with a horizontal aperture angle of around 30 degrees. And the front video and stereo-video cameras have a detection range of up to 80 meters with a horizontal aperture angle of around 40 degrees.
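The echo principle above reduces to simple arithmetic: the obstacle distance is half the round-trip travel time multiplied by the speed of sound. A minimal sketch, assuming air at roughly 20 °C and the 5-meter range limit mentioned in the text:

```python
# Sketch: obstacle distance from an ultrasonic echo's round-trip time.
# 343 m/s is the speed of sound in air at ~20 degrees C (an assumption);
# the 5 m range limit follows the figure given in the text.

SPEED_OF_SOUND_MS = 343.0  # m/s in air at room temperature
MAX_RANGE_M = 5.0          # typical ultrasonic detection limit

def echo_distance_m(round_trip_s: float):
    """Distance to the obstacle; the sound travels there and back,
    hence the division by two. Returns None beyond the usable range."""
    d = SPEED_OF_SOUND_MS * round_trip_s / 2.0
    return d if d <= MAX_RANGE_M else None

print(echo_distance_m(0.01))   # ~1.7 m: an echo arriving after 10 ms
print(echo_distance_m(0.05))   # None: would imply ~8.6 m, out of range
```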

Video camera sensors detect colors, signs, characters and textures and are therefore capable of traffic sign, traffic light or lane marking detection. Stereo cameras and 3D cameras with depth images provide 3D vision, making range determination possible. Furthermore, video camera sensors are also used for the central mirror and external side mirrors or to optically sense drivers in-vehicle. But cameras have issues with the distances of up to 250 meters needed for anticipatory driving and with delivering reliable data under conditions such as fog, rain, snow, LED flicker of surrounding vehicles' headlights or sunlight glare.

Therefore, radio detection and ranging (radar) sensors with increased range, angular and elevation resolution get added, complementing the video camera sensors. Radar sensors provide good ranging ability and relative velocity measurement and operate under any weather conditions and in harsh environments including dust, dirt, light, rain, snow and so on. Radar sensors transmit electromagnetic waves; when these waves get reflected from objects, they reveal the objects' distance and velocity. Mid- and long-range radars are deployed around the vehicle to track the distance and speed of objects like vehicles, motorcycles, bikes and pedestrians in the neighborhood of the vehicle in real time. Radars scan horizontally, delivering 2D data; if vertical scanning is added, 3D data result. The vehicle front and rear mid-range radar provides a detection range of up to 150 meters with a horizontal aperture angle of around 45 or 130 degrees. The vehicle long-range radar range is up to 250 meters with a horizontal aperture angle of 30 degrees. To increase the radar sensor 3D performance even further, to track detected objects and classify them, the elevation, range, Doppler and angular resolution need to be optimized for radar frequencies up to 79 GHz.
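The relative velocity measurement mentioned above follows from the Doppler shift of the reflected wave, v = f_d · c / (2 · f_c). A minimal sketch, assuming a 77 GHz automotive radar carrier (a common band, used here as an illustrative value):

```python
# Sketch: relative velocity from the radar Doppler shift,
# v = f_d * c / (2 * f_c). The 77 GHz carrier is an assumed example.

C = 299_792_458.0        # speed of light, m/s
F_CARRIER_HZ = 77e9      # assumed automotive radar carrier frequency

def relative_velocity_ms(doppler_shift_hz: float) -> float:
    """Closing speed of the target; the factor 2 appears because the
    wave travels out to the object and back."""
    return doppler_shift_hz * C / (2.0 * F_CARRIER_HZ)

# A 10 kHz Doppler shift at 77 GHz corresponds to roughly 19.5 m/s
print(round(relative_velocity_ms(10e3), 2))
```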

Tackling the high resolution required for automated and autonomous driving, light detection and ranging (LIDAR) sensors scan the environment with a non-visible laser beam, which measures distances and produces a full 3D image of the vehicle's surroundings. LIDAR sensors combine one or more lasers with a detector that senses the photons reflected from scanned objects, along with built-in data processors that measure the time of flight (ToF) to detect structure and motion in three dimensions. Whereas a single fixed laser performs simple ranging, advanced LIDAR systems use multiple lasers or a rotating system that scans much further and provides wider fields of view.
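The time-of-flight measurement plus the beam's scan angles yield one 3D point per laser return, which is how a scanning LIDAR builds its point cloud. A minimal sketch of that conversion (the timing value below is only an illustrative example):

```python
# Sketch: LIDAR time-of-flight range plus beam angles -> 3D point.

import math

C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Range from time of flight: the pulse travels out and back."""
    return C * round_trip_s / 2.0

def to_cartesian(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Spherical (range, azimuth, elevation) -> Cartesian (x, y, z)."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

r = tof_range_m(667e-9)             # a ~667 ns round trip is ~100 m
print(round(r, 1))                  # ~100.0 m
print(to_cartesian(r, 0.0, 0.0))    # straight ahead: (~100, 0, 0)
```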

Together with sensor data fusion, LIDAR allows a high resolution vehicle location estimate, including the positions of surrounding vehicles, pedestrians and other objects. And flash LIDAR, where a single laser pulse illuminates all pixels per frame and the laser pulse return is focused through the lens onto a 3D focal plane array to do imaging through obscuration, provides an even significantly higher resolution vision around the entire vehicle.

Out of these sensor data, a comprehensive real-time environment model enables autonomous and automated driving of level 3 and above. Based on the fusion of continuously sensing ultrasonic sound sensors, video cameras, radars and 3D flash LIDAR, along with the vehicle's precise location, the model includes traffic participants (cars, pedestrians), the static environment (occupancy grid), a road model (geometry, conditions), traffic control data (speed limits, traffic lights) and precise map localization (landmarks). In-vehicle road condition sensors like video cameras, vehicle dynamics control (anti-lock braking system, traction control system, active yaw control), tire effects (slip, vibration), local vehicle weather (air temperature, rain intensity) and cloud data (digital weather maps, dynamic safety maps) are added, and all data are fused into a model covering the environment, the vehicle (dynamics model) and the tires (friction estimation). This temporal model is then applied, for example, to ADAS use cases as shown in Table 4.2 to sense the road, curb and lanes, to determine the center of the lane, lane edge and road curb, and to correct GNSS errors.
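The occupancy grid used for the static environment can be sketched very simply: detections from different sensors are fused into cells around the vehicle. The grid size, cell resolution and max-confidence fusion rule below are illustrative assumptions, not a production fusion algorithm.

```python
# Minimal occupancy-grid sketch: fused detections mark cells of the
# static environment model. Grid size, resolution and the max-confidence
# fusion rule are illustrative assumptions.

GRID_SIZE = 100          # 100 x 100 cells
CELL_M = 0.5             # each cell covers 0.5 m x 0.5 m, vehicle at center

def make_grid():
    return [[0.0] * GRID_SIZE for _ in range(GRID_SIZE)]

def mark_detection(grid, x_m: float, y_m: float, confidence: float):
    """Fuse one detection (vehicle-relative coordinates, meters) into the
    grid by keeping the maximum confidence seen for that cell."""
    col = int(x_m / CELL_M) + GRID_SIZE // 2
    row = int(y_m / CELL_M) + GRID_SIZE // 2
    if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
        grid[row][col] = max(grid[row][col], confidence)

grid = make_grid()
mark_detection(grid, 10.0, 2.0, 0.7)   # radar: object 10 m ahead, 2 m left
mark_detection(grid, 10.0, 2.0, 0.9)   # camera confirms, higher confidence
print(grid[54][70])                    # the fused cell now holds 0.9
```

Real implementations update cell probabilities with log-odds from sensor models rather than a simple maximum, but the structure of the data is the same.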

Table 4.2: Vehicle sensor data fusion for ADAS

The requirements on data throughput, latency and reliability of V2X networking and connectivity for sensor technology depend very strongly on the traffic use cases and in particular on their very diverging dynamics. Because of the dynamics of vehicle and traffic scenarios, the communications requirements depend, among other parameters, on the communication distance, the vehicle speed and the number and density of devices which need to be linked with each other. For example, use cases with the need for cooperative perception or cooperative decision making for lane or parking assist require a much higher number of traffic participants to be linked than use cases like adaptive cruise control or traffic sign recognition.

There are use cases where the exchange of raw, pre- or post-processed vehicle sensor data is needed, although it consumes much more networking and connectivity resources. For example, there are algorithms where the processing of data for the same objective is not the same on different vehicles. Different algorithms process the same raw data for a different purpose, e.g. lane keeping assist versus vehicle following. Advantages of the exchange of pre- or post-processed data instead of raw sensor data include lower bandwidth consumption and scalability.

Another strong demand for V2X networking and connectivity comes from the evolution of vehicle sensing to comprehensive 3D perception as required for SAE level 3 and above use cases. Insufficient communications and computing performance, high cost and poor precision have so far kept 3D systems out of many vehicle use cases. Now 3D perception technology is gaining momentum in more and more use cases through increased performance and high-resolution sensors. There are various technologies to gather three-dimensional distance data from a traffic scenario. Active solutions such as LIDAR or other time-of-flight sensors determine distance information directly. These solutions require a decent amount of computing performance and place almost no constraints on the traffic scenario structure. The resolution of current time-of-flight systems depends on the kind of sensor used, and their use in real traffic scenarios relies on good viewing conditions. Passive solutions use image data taken by video cameras, similar to the distance perception of the human visual system. They offer good spatial resolution but require high-performance computing and suffer under poor lighting conditions and in poorly textured traffic environments. Data pattern projections and video camera stereo systems enable high spatial and depth resolutions.
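Passive stereo depth follows the classic relation Z = f · B / d: depth is focal length times camera baseline divided by the pixel disparity between the two images. A minimal sketch, with illustrative camera parameters (the focal length and baseline below are assumptions):

```python
# Sketch of passive stereo depth: Z = f * B / d, where f is the focal
# length in pixels, B the camera baseline and d the pixel disparity.
# The camera parameters are illustrative assumptions.

FOCAL_PX = 1200.0      # assumed focal length in pixels
BASELINE_M = 0.30      # assumed distance between the two cameras

def depth_from_disparity_m(disparity_px: float):
    """Depth of a matched point; zero disparity means the point is
    effectively at infinity, so None is returned."""
    if disparity_px <= 0.0:
        return None
    return FOCAL_PX * BASELINE_M / disparity_px

print(depth_from_disparity_m(18.0))  # 20.0 m
print(depth_from_disparity_m(4.5))   # 80.0 m: small disparity, coarse depth
```

The second call illustrates why stereo depth degrades with distance: at long range the disparity shrinks toward the matching noise floor.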

4.2 Computing

Depending on the type of vehicle, ranging from small, compact, intermediate, large sports utility and luxury vehicles up to vans, the number of hardware components varies between 10 and 150. Vehicles today contain around 100 electronic control units (ECU), or more when all optional features are chosen at purchase. An ECU is a computing platform comprised of one or more micro controller units (MCU) to control a certain system domain and vehicle function by using input sensors and actuators. There are ECUs for engine systems, transmission control, electric power steering (EPS), hybrid electric vehicles (HEV), brake-by-wire systems, airbags, smart keyless entry, tire pressure monitoring systems (TPMS), the dashboard, adaptive front lighting systems (AFS), body control, doors or wipers. For instance, the engine system ECU processes data from vehicle sensors like crank position, an air flow meter, intake temperature and throttle sensors to control fuel injection volume, ignition timing and many vehicle actuators. The real-time behavior is determined by rotation speed and motor cycles; for example, at 6000 rpm one cycle is 20 milliseconds. The real-time requirement for ignition is on the order of 10 μs, so the processing of sensor data for the fuel injection volume has to be finished within 10 μs.
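The timing figure above can be checked with a line of arithmetic: at 6000 rpm one crankshaft revolution takes 10 ms, and one full four-stroke cycle (two revolutions) takes the stated 20 ms.

```python
# Sketch of the engine timing arithmetic from the text: at 6000 rpm one
# revolution takes 10 ms and one four-stroke cycle (two revolutions) 20 ms.

def revolution_period_ms(rpm: float) -> float:
    """Time for one crankshaft revolution (60 000 ms per minute)."""
    return 60_000.0 / rpm

def four_stroke_cycle_ms(rpm: float) -> float:
    """A four-stroke cycle spans two crankshaft revolutions."""
    return 2.0 * revolution_period_ms(rpm)

print(revolution_period_ms(6000))   # 10.0 ms per revolution
print(four_stroke_cycle_ms(6000))   # 20.0 ms per cycle, as stated in the text
```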

Vehicle networking and connectivity between ECUs enable the sharing of data and avoid redundancy of sensors, actuators, storage and computing building blocks. The in-vehicle system architecture evolves from a central gateway architecture with up to 100 ECUs in one vehicle, one ECU per function and signal-based communications to a domain controller architecture with dedicated and secure functionality domains and service-oriented networking and connectivity. In the future we might see a centralized functional architecture with virtualized data processing and networking and connectivity supporting the computing domains (Figure 4.3).

Figure 4.3: From central gateway (left) to domain controller (middle) and functional architecture (right)

The recent computing platform advancement challenges vehicle manufacturers and the whole ecosystem. While vehicle product cycles span at least three years, vehicle manufacturers and suppliers struggle to keep up with the much faster evolution of computing platforms, for example those used in smartphones, whose life cycles are very often shorter than one year. Several computing platforms are already integrated into vehicles across IVI, driver assistance, automated and autonomous driving, and other functions and features. In doing so, vehicles become more and more software-defined, exploiting the advancement of computing platforms while dealing with the challenges of safety, security and system cost at the same time.

In order to cover these challenges in the vehicle, functions get consolidated into a number of domain or area controllers that evolve from today's complex architecture, which is based upon a huge number of electronic control units (ECUs) distributed throughout the vehicle. Therefore, the vehicle ecosystem stakeholders look for a flexible and scalable vehicle computing platform architecture that can be configured to support various functions by varying the number and configuration of domain or area controllers in the vehicle and the software that runs on them.

The next generation of advanced driver assistance systems (ADAS) technology uses sensors which generate gigabytes of data that need to be fused, analyzed and processed fast enough so that decisions are made and vehicle actions performed in real time. The vehicle has to build environment awareness to adapt to previously unseen, evolving scenarios in less than a second, based on huge amounts of input data such as objects in the vehicle's neighborhood, speed, road or weather conditions. The vehicle must account for the unpredictable behavior of other vehicles, pedestrians and other objects while steadily communicating.

There are different approaches for computing platforms targeting ADAS and autonomous and automated driving use cases. For example, ADAS use cases like adaptive cruise control (ACC) or lane keep assist (LKA) are implemented as a distributed data processing architecture where dedicated ECUs process sensor data specific for one ADAS function creating latency and connectivity challenges for the central computing unit for sensor data fusion. Another more scalable and flexible option is a central unit which is capable of raw sensor data processing of every sensor as well as of doing sensor data fusion without latency to support ADAS use cases.

One approach for automated and autonomous driving depends on pure computing performance to deal with the overwhelming amount of sensor data. In this approach, for instance, video camera, radar and LIDAR data are fused with high-definition map (HDM) data to estimate the vehicle location. Another approach gathers only as much sensor data as necessary to keep the amount of data manageable. It gathers video camera and radar data to recognize outlines and textures of the environment. Then it applies, for example, triangulation to determine the position of the vehicle between two precisely measured waypoints.
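The waypoint triangulation mentioned above can be sketched as a 2D circle intersection: given the ranges to two precisely measured waypoints, the vehicle lies at one of the two intersection points of the circles around them. This is a geometric illustration, not the production localization algorithm of any vendor.

```python
# Sketch: 2D position fix by triangulating between two measured waypoints,
# given the ranges to each (circle-circle intersection). Of the two
# geometric solutions, the caller picks the one on the correct road side.

import math

def locate(wp1, wp2, r1: float, r2: float):
    """Intersect circles around waypoints wp1, wp2 with radii r1, r2.
    Returns the two candidate positions, or None if the ranges are
    geometrically inconsistent."""
    dx, dy = wp2[0] - wp1[0], wp2[1] - wp1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None
    a = (r1**2 - r2**2 + d**2) / (2 * d)      # distance along wp1->wp2 axis
    h = math.sqrt(max(r1**2 - a**2, 0.0))     # offset perpendicular to it
    mx, my = wp1[0] + a * dx / d, wp1[1] + a * dy / d
    return ((mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d))

# Waypoints 100 m apart on the x-axis; vehicle 60 m from one, 50 m from the other
print(locate((0.0, 0.0), (100.0, 0.0), 60.0, 50.0))
```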

For SAE level 3 and above automated and autonomous driving, there are options to build the computing platform as a GPU-, communications- or server-centric platform. The computing platform needs the processing performance to handle data from numerous video cameras, radar and ultrasonic sound sensors as well as the LIDAR sensor, and is dedicated to supporting neural-network inferencing as a deep-learning accelerator delivering trillions of operations per second. The data processing on the platform supports all the required functions for use cases of SAE level 3 and, in the future, above.

And another basic data processing approach is rapidly evolving. Plenty of data get uploaded to data centers in the cloud, which apply artificial intelligence and deep and machine learning algorithms to them. The resulting instructions and rules are then transferred back to the vehicles, telling them what is around them and what they should deal with. Vehicles thus start to improve their recognition rates of surrounding objects, for example everything from an upcoming stoplight, chatting pedestrians, a recent collision or a ball rolling onto the road. This involves many data-hungry processing, storage and communications steps along with the many vehicle networking and connectivity links.

Autonomous and automated driving is more than simply gathering and processing huge amounts of sensor data, fusing them into an environment model and feeding them into vehicle system domain ECUs. Every vehicle needs a superior understanding and full awareness of every feasible traffic situation, which enables the vehicle control system to decide under any quickly changing circumstances. Deep and machine learning and artificial intelligence technology might have the potential to deliver this, calling for high-performance server computing in vehicles and the cloud and high-bandwidth communications interfaces between both.

Deep neural networks (DNN) process sensor data through successive layers, where each layer applies multiple linear and non-linear operations. DNN training happens offline on huge representative data sets, whereas the trained system then processes the sensor data in real time. The training vehicles implement a shadow mode, in which the vehicle runs the computing platform with actual sensor inputs and records the system outputs together with the driver's outputs to gather and transfer huge amounts of data. All use cases get captured, and the DNN is trained offline and validated afterward in the cloud. When the new functions are proven, they get enabled via FOTA and SOTA software updates.
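The layer structure described above can be sketched in a few lines: each layer applies a linear operation followed by a non-linearity (here ReLU). The weights below are random placeholders and the layer sizes are arbitrary; a trained network would load learned parameters instead.

```python
# Minimal DNN forward-pass sketch: successive layers, each a linear
# operation followed by a non-linearity. Weights are random placeholders.

import random

random.seed(0)

def linear(x, weights, bias):
    """One linear layer: y = W x + b (weights given as a list of rows)."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def relu(x):
    return [max(0.0, v) for v in x]

def forward(x, layers):
    """Pass the input through successive (weights, bias) layers,
    applying ReLU between them; the output layer stays linear."""
    for i, (weights, bias) in enumerate(layers):
        x = linear(x, weights, bias)
        if i < len(layers) - 1:
            x = relu(x)
    return x

def rand_layer(n_in, n_out):
    """Random placeholder weights; offline training would replace these."""
    w = [[random.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

net = [rand_layer(3, 4), rand_layer(4, 2)]   # 3 inputs -> 4 hidden -> 2 outputs
out = forward([0.5, -1.0, 2.0], net)
print(len(out))  # 2 output values
```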

For comparison, convolutional neural networks (CNN) implement several convolutional layers and a lower number of fully-connected layers. The number of convolutional layers depends on the use case and varies, with up to 250 layers. CNNs use a special architecture containing feature maps or convolutions which is particularly suited to recognizing images. Recurrent neural networks (RNN) are used to process sensor data which change over time, for example for motion planning. An RNN therefore samples data collections over time.

4.3 Communications

There are two implementation areas where V2X communications has an impact: intra-vehicle networking and connectivity and outside-vehicle networking and connectivity. There is a steadily rising demand for data connectivity inside vehicles. The requirements of in-vehicle networking and connectivity are very diverse according to the application areas and use cases. Thus, vehicle manufacturers apply networking technologies like the controller area network (CAN), the local interconnect network (LIN) and FlexRay to connect electronic control units (ECUs) with each other.

CAN is currently the de-facto standard protocol for in-vehicle networking, with up to 1 Mbit/s bandwidth and non-deterministic behavior under high load. It is widely used in the basic trunk network and in the powertrain and body systems. The current flexible data-rate protocol (CAN FD) delivers communications up to 5 Mbit/s and, with partial networking (CAN PN), improved energy efficiency. LIN is a single-master in-vehicle networking bus protocol for body applications with 19.2 kbps and a UART interface that is used for switch input and sensor input actuator control. FlexRay is a high-speed, high-performance communications protocol up to 10 Mbps with flexibility, reliability and security which is mainly used for X-by-wire, ADAS and high-performance applications. And the media oriented systems transport protocol (MOST) is designed for multimedia networking using optical fibre with bandwidth up to 150 Mbps.
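On all of these buses, ECUs exchange physical quantities packed as raw fields inside a frame payload, decoded by byte offset and a scale factor. A minimal sketch of such signal extraction; the vehicle-speed signal layout below (offset, length, scale) is a hypothetical example, not a real OEM database entry:

```python
# Sketch of unpacking a signal from a CAN frame payload: a raw
# little-endian field is extracted by byte offset, then scaled to
# physical units. The signal layout is a hypothetical example.

def decode_signal(payload: bytes, start_byte: int, length_bytes: int,
                  scale: float, offset: float = 0.0) -> float:
    """Extract a raw little-endian integer field and convert it to
    physical units via value = raw * scale + offset."""
    raw = int.from_bytes(payload[start_byte:start_byte + length_bytes],
                         byteorder="little")
    return raw * scale + offset

# 8-byte CAN payload; assume bytes 2-3 carry speed in units of 0.01 km/h
frame = bytes([0x00, 0x00, 0xE8, 0x1C, 0x00, 0x00, 0x00, 0x00])
speed_kmh = decode_signal(frame, start_byte=2, length_bytes=2, scale=0.01)
print(speed_kmh)  # 74.0 km/h (raw 0x1CE8 = 7400)
```

In practice the offsets, lengths, byte orders and scale factors for every signal come from the OEM's signal database, which plays the same role across CAN, LIN and FlexRay.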

For example, the chassis network applies CAN or FlexRay for high data rate, guaranteed response time and high reliability whereas the body electronics network implements CAN or LIN to link a large number of ECUs at moderate data rates and reliability but with low power consumption. The infotainment network applies MOST for multimedia high data throughput. In-vehicle networking and connectivity next-generation technologies include options such as mobile high-definition link (MHL), high-definition multimedia and control over a single cable (HDBaseT), Wi-Fi, near field communications (NFC) and Ethernet which is mainly used for diagnostics today but has high potential for more.

The intra-vehicle and outside-vehicle communications technology evolves in stages. In the first stage, ECUs are applied independently to various vehicle components like the engine, brakes, steering, etc., and no in-vehicle network is used. In the next stage, the ECUs exchange data to improve the efficiency of the control system, while each system still operates almost independently; in this stage the timing constraints on vehicle networking and connectivity are loose. With integrated systems in the following stage, each system still operates autonomously, but some applications are provided by multiple ECUs connected with in-vehicle networks. For functional safety, the mechanical backup system is still present, and the basic functions of a vehicle are maintained even if the electronics system fails. In the final stage, V2X networking and connectivity are intensively implemented and the mechanical systems are replaced by ECUs and networks.

The wireless V2X networking and connectivity technology has to fulfill the often contradictory requirements of all vehicular use cases with their extreme complexity of network topologies, mobility, environment dynamics and technology heterogeneity. It has to support unidirectional broadcast and geocast as well as direct peer-to-peer communications in ad-hoc or direct mode with always-available coverage.

Wireless V2X networking and connectivity technologies are expected to provide traffic and transport improvements for safety and traffic efficiency applications. Using ad-hoc wireless communications, a variety of data are exchanged between vehicles or with the vehicle infrastructure. The major wireless technologies for V2X networking and connectivity at present are wireless access for vehicular environments (WAVE), SAE J2735, ITS-G5 and 3GPP. Wireless access for vehicular environments (WAVE) is an approved amendment to the IEEE 802.11 standard, also known as IEEE 802.11p. WAVE makes sure the traffic data collection and transmission are immediate and stable and secures the data by applying IEEE 1609.2, while IEEE 1609.3 defines the WAVE connection setup and management. The communication between vehicles (V2V) or between the vehicles and the roadside infrastructure (V2I) is specified for spectrum in the 5.9 GHz band between 5.85 and 5.925 GHz.

The spectrum allocation for WAVE started with the request to allocate 75 MHz of spectrum in this 5.9 GHz band for intelligent transportation systems (ITS) in the United States in 1997. In October 1999, the FCC allocated the 5.9 GHz band for DSRC-based ITS applications and adopted basic technical rules for DSRC operations. The Federal Highway Administration (FHWA), an agency of the USDOT, developed a national, interoperable standard for dedicated short range communications (DSRC) equipment operating in the 5.9 GHz band by 2001. IEEE 802.11p has been tested in many projects by a large number of vehicle ecosystem stakeholders at many sites worldwide.

Another wireless technology that is commonly known in vehicular communications and, in particular, V2V networking and connectivity is the J2735 DSRC message set dictionary, maintained by the Society of Automotive Engineers (SAE). This standard specifies a message set with its data frames and data elements specifically for applications anticipated to use DSRC/WAVE communications systems. The message set comprises 15 messages, 72 data frames, 146 data elements and 11 external data entries. Message types are basic safety, a la carte, emergency vehicle alert, generic transfer, probe vehicle data and common safety request. In Europe, ETSI defined ITS-G5, which incorporates some of the specifications from WAVE.

Wireless communications technologies complement on-board vehicle sensors (Figure 4.4) and increase the vehicles’ perception beyond line of sight (eHorizon). Cooperative awareness messages (CAMs) are continuously sent out by all vehicles in a specific region. These messages contain the transmitter’s location, speed, and direction of travel and allow for use cases like intersection collision avoidance (ICA) to warn drivers if other vehicles are detected that are on a collision course. Signal phase and timing (SPaT) messages sent out by traffic lights enable the green light optimal speed advisory (GLOSA) use case, where drivers can adapt their speed. Event-based decentralized environmental notification messages (DENMs) are forwarded over several hops and are used to warn vehicles of hazardous situations such as the end of a traffic jam on the highway.
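The intersection collision avoidance use case above can be illustrated with a short sketch. The message fields and the collision check are simplified for illustration and do not follow the ETSI ASN.1 CAM encoding; a real implementation would also account for acceleration, map geometry and message age.

```python
import math
from dataclasses import dataclass

@dataclass
class CAM:
    """Simplified cooperative awareness message (illustrative fields only,
    not the standardized ETSI CAM encoding)."""
    station_id: int
    x: float        # position east (m)
    y: float        # position north (m)
    speed: float    # m/s
    heading: float  # radians, 0 = east

def position_at(msg: CAM, t: float) -> tuple:
    """Project the sender's position t seconds ahead, assuming constant velocity."""
    return (msg.x + msg.speed * math.cos(msg.heading) * t,
            msg.y + msg.speed * math.sin(msg.heading) * t)

def on_collision_course(a: CAM, b: CAM, horizon: float = 5.0,
                        threshold: float = 4.0) -> bool:
    """Naive intersection collision avoidance (ICA) check: warn if the two
    projected trajectories come within `threshold` metres of each other
    at any point within the look-ahead horizon."""
    for step in range(int(horizon * 10) + 1):
        t = step / 10.0
        ax, ay = position_at(a, t)
        bx, by = position_at(b, t)
        if math.hypot(ax - bx, ay - by) < threshold:
            return True
    return False

# Two vehicles approaching the same intersection at right angles.
ego = CAM(1, x=-50.0, y=0.0, speed=14.0, heading=0.0)            # eastbound
other = CAM(2, x=0.0, y=-50.0, speed=14.0, heading=math.pi / 2)  # northbound
print(on_collision_course(ego, other))  # True: both reach (0, 0) after ~3.6 s
```

The value of beyond-line-of-sight awareness is visible here: the check fires seconds before the vehicles could see each other at an obstructed intersection.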

Enhancements of the already standardized 3GPP LTE-Direct technology (ProSe, 3GPP release 12) led to the V2X specifications in 3GPP release 14, which address safety-critical and non-safety applications in joint activities with the standardization bodies ETSI ITS WG1 and SAE DSRC. The objective is that the LTE V2V solution should be able to carry V2X messaging protocols like SAE J2735 as well as cooperative awareness messages (CAM), decentralized environmental notification messages (DENM), signal phase and timing (SPaT), map data (MAP) and IVI.

Depending on the considered use-cases, distinct requirements come into play. Applications for in-vehicle infotainment (IVI) require high bandwidth and network capacity, active road safety relies on delay- and outage-critical data transmission, whereas data exchange for road traffic efficiency management typically comes without strict quality of service (QoS) requirements and exhibits graceful degradation of performance with increasing latency.
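The qualitative contrast between these three application classes can be captured in a small sketch. The categorisation below paraphrases the requirements stated above; the profile names and the scheduling rule are illustrative, not normative 3GPP QoS classes.

```python
# Qualitative QoS profile per V2X application class, as described above
# (illustrative categorisation, not normative 3GPP figures).
QOS_PROFILES = {
    "infotainment":       {"bandwidth": "high",     "latency": "tolerant", "reliability": "best-effort"},
    "active_safety":      {"bandwidth": "moderate", "latency": "critical", "reliability": "critical"},
    "traffic_efficiency": {"bandwidth": "low",      "latency": "tolerant", "reliability": "best-effort"},
}

def scheduler_priority(app_class: str) -> int:
    """Rank latency-critical classes first when radio resources are scarce."""
    order = {"critical": 0, "tolerant": 1}
    return order[QOS_PROFILES[app_class]["latency"]]

# Active road safety is scheduled ahead of the latency-tolerant classes.
print(sorted(QOS_PROFILES, key=scheduler_priority))
```

A real scheduler would of course work with numeric delay budgets and reliability targets per message, but the ordering principle is the same: delay-critical safety traffic preempts throughput-oriented traffic.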

3GPP considers several link types for V2X networking and connectivity use cases: vehicle to vehicle (V2V), treated as network-assisted inter-vehicle data exchange; vehicle to network (V2N), which delivers connectivity to the cloud; vehicle to infrastructure (V2I), which covers wireless links to RSUs; and vehicle to pedestrian (V2P), which supports data exchange with other devices in the proximity of vehicles, e.g. pedestrians and cyclists. Dual connectivity in dense heterogeneous networks, LTE-based broadcast services such as the enhanced multimedia broadcast and multicast service (eMBMS), and proximity services including D2D communications are used to implement LTE-based V2X networking and connectivity.

V2X networking and connectivity is capable of enabling many use cases. But unless the functional safety challenge is solved, all automated and autonomous driving use cases are out of scope, and the use of V2X is limited mainly to mobility as a service, convenience or infotainment related use cases. These use cases do not rely on embedded connectivity and are therefore possible with smartphone mirroring and tethered connectivity to get vehicle ecosystem stakeholders linked.

Figure 4.4: V2X networking and connectivity data rates by sensors

Latency rises with higher numbers of vehicles in a network cell. For instance, it has been shown (Phan, Rembarz, & Sories, A Capacity Analysis for the Transmission of Event and Cooperative Awareness Messages in LTE Networks, October 2011) that with LTE technology, the capacity limit for distributing event-triggered messages to all devices in the same cell can reach up to 150 devices in urban scenarios and about 100 devices in rural scenarios, maintaining an end-to-end delay below 200 milliseconds.

Every data packet (e.g., between two nearby vehicles) must traverse the infrastructure, involving one uplink (UL) and one downlink (DL) transmission, which may be suboptimal compared to a single radio transmission along the direct path between source and destination nodes, possibly enjoying much lower delay (especially in an overloaded cell). In addition to being a potential traffic bottleneck, the infrastructure may become a single point of failure e.g., in case of eNB failure.

3GPP LTE was originally designed for broadband traffic and is not optimal for transmitting the small amounts of data typical of V2X use cases, which results in suboptimal usage of radio resources and spectrum. Many V2X use cases require support for a large number of very small packets. This leads to potential issues within the current cellular designs, for example in channel coding, radio resource management, and control and channel estimation. In particular, commonly implemented control and channel estimation quickly becomes very inefficient for very short payloads.

Another issue for many vehicle use cases is the protocol delay until a payload can be transmitted. The common random access procedure in LTE is a multi-stage protocol with several messages in both uplink and downlink. Even a simplified implementation of the existing LTE access requires at least one preamble transmission and one downlink feedback preceding the payload transmission due to the required uplink synchronization. The set of at most 64 preambles per sub-frame is shared among all devices in a cell, regardless of their applications. The preambles are utilized for initial access, re-synchronization for data transmissions, handover, and radio link failure recovery. The preamble scheme therefore has many constraints with respect to delay and Doppler spreads, which limit the spectral efficiency of the physical random access channel (PRACH), and it does not scale with the number of devices. In addition, an increasing number of random access responses, at 56 bits per device, further restricts the overall downlink capacity.
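The scaling problem of the shared preamble pool can be quantified with a birthday-problem sketch: with 64 preambles per cell, the probability that at least two of n devices pick the same preamble in one access attempt grows quickly. This ignores retransmission back-off and group partitioning of the preamble set, so it is an upper-level intuition rather than an exact LTE model.

```python
def collision_probability(n_devices: int, n_preambles: int = 64) -> float:
    """Probability that at least two of n devices choose the same
    random-access preamble in a single attempt (birthday-problem sketch,
    uniform random preamble selection assumed)."""
    p_no_collision = 1.0
    for k in range(n_devices):
        p_no_collision *= (n_preambles - k) / n_preambles
    return 1.0 - p_no_collision

# Collision probability rises steeply with cell occupancy.
for n in (5, 10, 20, 30):
    print(n, round(collision_probability(n), 3))
```

Already a few dozen simultaneously accessing devices make preamble collisions, and thus additional access delay, the common case rather than the exception.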

Another drawback of the wireless cellular LTE infrastructure-based approach is that it is not available out of coverage and therefore does not satisfy the stringent functional safety needs of vehicles. This implies that additional infrastructure or specific standard extensions must be deployed if coverage is to be guaranteed.

Figure 4.5: V2X operation only based on Uu interface
Figure 4.6: V2X operation only based on PC5 interface
Figure 4.7: V2X operation using both PC5 and Uu interface
Figure 4.8: V2X operation using both PC5 and Uu interface

Table 4.3: Key performance indicators for V2X networking and connectivity

4.4 Software

An automated and autonomous vehicle system is a very complex hardware and software system. The amount and complexity of software in vehicles increases, and the number of lines of code grows further, into the ballpark of hundreds of millions of lines of code, controlling hundreds of electronic control units (ECUs). More functions, more interfaces, more V2X networking and connectivity, more sensors and actuators, more human-machine interfaces and more diversity in the supply chain create huge challenges. And the very different software development and innovation cycles of the vehicle and the computing and communications stakeholders must be aligned. Additionally, the software has to comply with functional safety requirements and must be maintained very efficiently.

There is a rush going on, led by major computing and communications stakeholders in collaboration with vehicle manufacturers, to develop and mature the autonomous and automated driving hardware and software stack. SAE level 4 software development is under way for several platforms and vehicles, with test driving in urban and rural environments. Part of the work supports computing platforms with field programmable gate arrays (FPGAs), GPUs, memory, sensors, recurrent neural networks (RNNs) and convolutional neural networks (CNNs), providing localization and planning with data path processing, decision and behavior with motion and behavior modules and arbitration, control with lockstep processors, safety monitors, fail-safe fallback and X-by-wire controllers, as well as V2X networking and connectivity.

The automotive open system architecture (AUTOSAR) is a consortium composed mainly of vehicle manufacturers and electrical equipment suppliers with the objective of developing a common-use automotive software and electronic architecture through the standardization of software platforms and development processes, relating to in-vehicle networking such as CAN, LIN and FlexRay. The current AUTOSAR software architecture for electronic control units, which was primarily developed for vehicle control functions, is being developed further to support the real-time data processing and networking and connectivity required for autonomous and automated driving.

AUTOSAR work driven by vehicle manufacturers puts emphasis on the software architecture, the software development methodology and the application interfaces to support software for different functional domains including ADAS, vehicle control and FlexRay, CAN or Ethernet. The runtime environment (RTE) serves as a software abstraction layer between hardware-independent application software from vehicle computing platforms and architecture-dependent software (Figure 4.9). The corresponding software modules are independent and are used flexibly in combination with the application interfaces, which are available for the vehicle's body, interior and comfort, power train, chassis and passenger and pedestrian protection. ADAS use cases are typical examples of driver assistance applications.

Figure 4.9: AUTOSAR software architecture

AUTOSAR supports very different vehicle system configurations and network topologies. There are communications stacks for vehicle onboard units and RSUs for CAN, FlexRay or Ethernet, and there need to be stacks for IEEE 802.11p/ETSI ITS-G5 as well as LTE and 5G. Since AUTOSAR defines a development methodology and software infrastructure, partial aspects of part 6 of ISO 26262, which encompasses the requirements for the development of safety-related software, are implemented.

Whereas AUTOSAR clearly targets vehicle control and ADAS, there is another software activity with specifications and standards in the vehicle ecosystem pursuing the development of infotainment applications. GENIVI focuses on middleware providing a software toolbox for a vehicle IVI platform. Function domains are, for example, software management (e.g. SOTA), networks (CAN, FlexRay, USB, Wi-Fi, NFC, Bluetooth, etc.), navigation and location based services, and telephony. Regarding V2X networking and connectivity, there are currently projects in the area of smart device links (a set of protocols and messages that connect applications on a smartphone to a vehicle head unit) and remote vehicle interaction (to provide robust and secure communications between a vehicle and the rest of the world).

Another important software topic strongly related to V2X networking and connectivity is the over-the-air (OTA) firmware and software update for vehicles. The major drivers are the rising demand for connected vehicle devices, changing government regulations regarding vehicle safety and cyber security, and increasing demand for advanced navigation, telematics and infotainment. On top of that, vehicle manufacturers are urged to protect vehicle data from remote hacking and malfunctioning, which in turn raises the demand for OTA software updates for vehicles as well.
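One safeguard implied above, verifying that a delivered firmware image really is the one the distribution server released, can be sketched briefly. This is a toy: the shared-key HMAC stands in for the asymmetric code-signing a production OTA client would use, and all names are illustrative.

```python
import hashlib
import hmac

SERVER_KEY = b"demo-shared-secret"   # illustrative only; real systems use PKI

def sign_manifest(image: bytes) -> bytes:
    """Server side: authenticate the image digest with the shared key."""
    digest = hashlib.sha256(image).hexdigest().encode()
    return hmac.new(SERVER_KEY, digest, hashlib.sha256).digest()

def ota_client_accepts(image: bytes, signature: bytes) -> bool:
    """Vehicle side: install only if the recomputed signature matches,
    using a constant-time comparison to resist timing attacks."""
    expected = sign_manifest(image)
    return hmac.compare_digest(expected, signature)

firmware = b"\x7fELF...ecu-firmware-v2"
sig = sign_manifest(firmware)
print(ota_client_accepts(firmware, sig))            # True: untampered image
print(ota_client_accepts(firmware + b"\x00", sig))  # False: image altered in transit
```

Even this minimal check ensures that a bit-flipped or maliciously modified image downloaded over the air is rejected before it reaches an ECU.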

4.5 HAD maps

We are all aware today that safe, reliable and trustworthy vehicle navigation systems depend heavily on dynamically updated high definition maps. We define a high definition map as an in-vehicle database comprising a multi-layer data stack where the layers are interlinked. The objective of this dynamic HD map is to provide localization, to sense the vehicle surroundings, to perceive and fuse sensor data, to support reasoning and decision making, and to provide input for motion control. The static layer 1 comprises the basic digital cartographic, topological and road facilities data. The quasi-static layer 2 includes planned and forecasted traffic regulations, road work and the weather forecast. The dynamic layer 3 encompasses traffic data including accidents, congestion and local weather. And the highly dynamic layer 4 comprises the path data, surrounding vehicle and pedestrian data and the timing of traffic signals.
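The four-layer stack above can be sketched as a minimal data structure. The field names are illustrative and do not follow any standardized map format; the point is that layers with very different update rates are kept interlinked in one tile.

```python
from dataclasses import dataclass, field

@dataclass
class HDMapTile:
    """Minimal sketch of the four-layer HD map stack (illustrative names)."""
    static_layer: dict = field(default_factory=dict)         # layer 1: cartography, topology, road facilities
    quasi_static_layer: dict = field(default_factory=dict)   # layer 2: regulations, road work, forecasts
    dynamic_layer: dict = field(default_factory=dict)        # layer 3: accidents, congestion, local weather
    highly_dynamic_layer: dict = field(default_factory=dict) # layer 4: paths, surrounding objects, signal timing

    def update(self, layer: str, key: str, value) -> None:
        """Write one attribute into the named layer."""
        getattr(self, layer)[key] = value

tile = HDMapTile()
tile.update("static_layer", "lane_count", 3)            # updated yearly
tile.update("dynamic_layer", "congestion", "heavy")     # updated every few minutes
tile.update("highly_dynamic_layer", "signal_phase", "red")  # updated sub-second
print(tile.dynamic_layer["congestion"])  # heavy
```

In practice each layer would carry its own validity time and source attribution, since a stale layer-4 entry is worse than none.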

Maps for navigation and path planning are typically made for human consumption. They are updated at least once a year and are cartographed by special surveillance vehicles with human editing. The online vehicle navigation is done by the driver in real-time and is supported by upfront offline route planning using maps. Evolving from navigation toward automated and autonomous driving, the maps for vehicles are generally made for computing, are kept up to date and are cartographed, for example, by regular vehicles on the road with automated data processing. Vehicle sensors and vehicle data automatically fuel the map update together with V2X networking and connectivity. The vehicle navigation then becomes tightly combined with strategic, tactical and reactive vehicle path planning under very different time and location constraints.

Highly automated driving (HAD) maps include highly detailed inventories of all stationary physical assets related to streets such as lanes, road edges, shoulders, dividers, traffic signals, signage, paint markings, poles, and all other critical data needed for the direction finding on roadways and intersections by automated and autonomous vehicles. HAD map features are highly accurate (in the cm absolute ranges) in location and time and get updated in real-time via cloud and crowd sourcing. Dynamic live HAD maps deliver data beyond the line of sight for electronic horizon (dynamic eHorizon) predictive awareness to let autonomous vehicles know what lies ahead. The vehicle looks for example beyond 300 m and around the corner with the HAD map model provided.

Path planning on the route level (Figure 4.10), for example when the vehicle plans the next highway exit, is not real-time. It delivers the route to a selected destination for an automated or autonomous drive. The appropriateness of street segments is considered for the route calculation using an extended navigation map. Path planning on the lane level, for instance when the vehicle has to take the right turn at the intersection, is near real-time. It produces lane change advice according to the vehicle's driving lane and the lane traffic situation for upcoming manoeuvres along the vehicle route. This planning uses the HAD map with detailed lane data and lane-accurate positioning. Finally, there is path planning at the lane and geometry level (e.g. the vehicle must avoid any accident), which is strictly real-time. It computes the automated or autonomous vehicle trajectories based on the lane change advice, considers the surrounding traffic and senses the characteristics of the surrounding environment. This planning uses sensor and environment object data from video camera, ultrasonic sound, radar and LIDAR sensors.

Figure 4.10: Multi-stage control for vehicle path planning
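The three planning stages above can be sketched as a pipeline with their differing time constraints. The stage names, time budgets and map sources below are illustrative values chosen to mirror the not-real-time / near-real-time / strictly-real-time split described in the text.

```python
from dataclasses import dataclass

@dataclass
class PlanningStage:
    name: str
    deadline_ms: float   # per-cycle time budget (illustrative)
    map_source: str

# Route -> lane -> trajectory, with tightening deadlines at each stage.
PIPELINE = [
    PlanningStage("route planning",      deadline_ms=5000.0, map_source="extended navigation map"),
    PlanningStage("lane-level planning", deadline_ms=500.0,  map_source="HAD map with lane data"),
    PlanningStage("trajectory planning", deadline_ms=50.0,   map_source="sensor/environment objects"),
]

def strictest_stage(pipeline: list) -> PlanningStage:
    """The strictly real-time trajectory stage dominates the timing design."""
    return min(pipeline, key=lambda s: s.deadline_ms)

print(strictest_stage(PIPELINE).name)  # trajectory planning
```

The design point this illustrates: each stage consumes the output of the previous one but runs on its own cycle, so the trajectory planner must remain safe even when a lane-level update arrives late.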

With V2X, the vehicle's HAD map data get continuously updated with data from other vehicles, pedestrians and other ecosystem stakeholders. The vehicle acts as another sensor with adaptable range and resolution. The sheer amount of data, the communication performance required and their scalability, for instance between urban and rural scenarios, are currently an issue. The implementation of deep learning and artificial intelligence requires a continuous feedback loop comprised of data gathering (shadow mode), DNN training, inference and SOTA updates. During shadow mode, the vehicle records large amounts of vehicle data and uploads them to servers in the cloud. New inference models, fail-safe and fallback instructions, firmware and other software updates are delivered via SOTA.

ADAS use cases benefit from the electronic horizon and HAD map system architecture (Figure 4.11). For fully automated and autonomous driving vehicles, SAE level 4 use cases include traffic jam assist, highway pilot or automated valet parking. SAE level 3 use cases for active driver assistance systems are adaptive cruise control, forward collision warning or lane keep assist. Lower SAE level use cases for efficient driving or passive driver assistance systems are obviously supported as well. Examples are eco drive assist, range control assist, intersection warning, traffic sign assist, curve speed warning, predictive curve light or night vision.

Figure 4.11: HAD architecture with electronic horizon

The electronic horizon provides other vehicle ECUs with a continuous prediction of the upcoming street network, using standardized data exchange protocols between navigation and telematics and HAD maps. It integrates map-matched localization, most probable path estimation, static map attributes including curvature, slopes, speed limits, road class, etc., and dynamic data such as route, traffic data, hazard warnings, road construction data and weather. A perception layer aims to detect the conditions of the environment surrounding the vehicle, for instance by identifying the appropriate lane and the presence of obstacles on the track. A reference generation layer, based on the inputs from the perception layer, provides the reference signals in the form of a reference trajectory to be followed by the vehicle. And a control layer defines the commands required for ensuring the tracking performance of the reference trajectory. These commands are usually expressed in terms of reference steering angles and traction or braking torques and are sent to the vehicle control ECUs.

The advanced driver assistance systems interface specification (ADASIS) consortium maintains the standard protocol for exchanging electronic horizon data. The electronic horizon reconstruction ensures the integrity of the electronic horizon protocol, as it runs on an ECU domain controller and is therefore subject to ASIL allocations and functional safety. ADASIS version 2 comprises the standard and ADAS map data for advanced driver assistance use cases and is designed for data broadcast on the CAN bus. The most probable path and tree length is up to 8190 meters with an attribute resolution of meters. ADASIS version 3 consists of HAD map data for automated and autonomous driving up to SAE level 5. It supports broadband communication such as Ethernet and TCP/IP, with bi-directional communication support, for example in a P2P mode. The most probable path and tree length is up to 43000 kilometers with an attribute resolution of centimeters. This enables automated and autonomous driving HAD features like an extended lane model (road and lane geometry, road and lane width, lane marking, lane connectivity), a highly accurate junction model (lane merges and markings, splits, stop lines), a speed profile related to specific road and lane segments (real-time speed profiles, speed profile histories), vehicles within the surroundings (unique ID, position on the link and lane path, speed, vehicle status) and parking area models (geometry).
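The idea behind the most probable path can be illustrated with a toy sketch: from a tree of upcoming road segments, follow the highest-probability branch to build the path the electronic horizon broadcasts. The data structure and the transition probabilities are invented for illustration and have nothing to do with the ADASIS wire format.

```python
def most_probable_path(tree: dict, root: str) -> list:
    """Greedily follow the highest-probability child at each fork.
    `tree` maps a segment id to its children as (segment_id, probability)
    pairs; leaves map to an empty list."""
    path = [root]
    node = root
    while tree.get(node):
        node = max(tree[node], key=lambda child: child[1])[0]
        path.append(node)
    return path

# A small road tree ahead of the vehicle (illustrative probabilities,
# e.g. derived from historical turn ratios or the active route).
road_tree = {
    "A": [("B", 0.8), ("C", 0.2)],   # most traffic continues straight to B
    "B": [("D", 0.6), ("E", 0.4)],
    "C": [],
    "D": [],
    "E": [],
}
print(most_probable_path(road_tree, "A"))  # ['A', 'B', 'D']
```

An ADASIS provider additionally keeps the lower-probability branches as stubs in the horizon tree, so a consumer ECU can react quickly if the driver deviates from the predicted path.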

4.6 Functional safety

Automated and autonomous driving vehicles will never make it out of test trials unless issues regarding privacy, functional safety, and security have been addressed. The term privacy is used here as the person's freedom from interference or intrusion by others and, in particular, the ability to determine what data about him or her are shared. We use the term functional safety for the correct system functioning of the vehicle and the protection of the driver and passengers in the vehicle, mainly to ensure the avoidance of vehicle or traffic accidents. The term security refers to protective digital privacy measures where the integrity, confidentiality, and availability of an individual's data are guaranteed.

Functional safety is one of the hottest topics as semiconductor manufacturers' wireless solutions for automated and autonomous vehicles become available now. Automotive systems must be compliant with ISO 26262 and its associated automotive safety integrity levels (ASILs) B through D. ASIL B nominal safety use cases are, for example, lane assist, park assist, speedometer or rear camera, where it is sufficient to make the driver aware if the system is not working. ASIL D relates to critical safety use cases like braking, steering, acceleration, chassis control, air bag or seat belt tension, where the driver relies on the systems to function correctly all the time. Standards like IEC 61508, which covers industrial functional safety, are important to look at as well, since IEC 61508 applies to electrical, electronic and programmable electronic systems. These standards mandate robust error detection and mitigation to ensure a vehicle does not run into a risk of hazards caused by system malfunction and continues to operate safely even after a component failure.

ISO 26262 must be implemented from intellectual property design to system implementation, accompanied by comprehensive documentation for the applicable requirements. The vehicle systems must be capable of dealing with systematic faults (hardware errata, software bugs, incorrect specifications, incomplete requirements) as well as random faults (hardware failures, memory errors, permanent, transient or latent errors). There are different options to do so. The first option is to implement diverse systems and choose one of them (dual or triple units, different implementations, random and systematic choice). Second, redundant hardware blocks are used with added checks. And third, there is redundant execution, running functions multiple times and double-checking the results.
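The redundant-execution option can be sketched as a classic triple modular redundancy (TMR) voter: run three (ideally diverse) implementations of the same function and take the majority result, flagging a fault when one replica disagrees. The function names below are illustrative.

```python
from collections import Counter

def tmr_vote(replicas, *args):
    """Triple modular redundancy sketch: return the majority result of
    independent computations, or raise if no majority exists."""
    results = [f(*args) for f in replicas]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority - unrecoverable fault")
    if count < len(results):
        print("warning: one replica disagreed, fault logged")
    return value

# Three diverse implementations of the same (toy) brake-demand spec;
# the third replica is deliberately faulty.
def brake_a(v): return v * 2
def brake_b(v): return v + v
def brake_c(v): return v * 2 + 1

print(tmr_vote([brake_a, brake_b, brake_c], 10))  # 20, with a fault warning
```

This masks a single random fault while still surfacing it for diagnostics; a lockstep processor applies the same idea in hardware, comparing the outputs of two cores cycle by cycle.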

Users expect privacy and security in their cars. Consequently, the collection, processing, and linking of data have to be in accordance with privacy laws. At present, a lot of personalized data is already collected via navigation systems, smartphones, or during vehicle maintenance. Automated vehicles are capable of recording and providing large amounts of data that might assist crash investigations and accident reconstructions. Such data is of high relevance for improving active safety systems and system reliability but also for resolving liability issues. Existing accident databases such as the GIDAS project (German in-depth accident study) are updated and extended continuously with automated driving information (SAE level classification of involved vehicles, driver/automated mode, etc.).

Furthermore, cyber security has to be considered in order to avoid the vehicle or driver losing control over the vehicle due to intrusion and theft, malicious hacking or unauthorized updates, for instance. Vehicles offer multiple vulnerabilities for attacks, in particular with the implementation of wireless networking and connectivity. Attacks can target the IVI, the navigation and telematics, the vehicle control, the connectivity or the ADAS system. These attacks can happen externally via wireless (Wi-Fi, Bluetooth, NFC, RKE, LTE, 5G) links or from inside the vehicle via the IVI and on-board diagnostics interfaces.

Probable attack points are the remote anti-theft system, the tire pressure monitoring system (TPMS), remote keyless entry and start, Bluetooth, the radio data system, and the navigation, telematics and connectivity system. Possible defences against attacks are the minimization of attack points, communications protocol message injection mitigation, communications protocol message cryptography, vehicle network architecture changes or clock-based intrusion detection. In addition to cyber-attacks on the vehicle, there are also attacks against the supporting cloud infrastructure. If the vehicle ecosystem infrastructure gets compromised, the interfaces to the vehicle will be abused. Cloud-based wireless connected vehicle services are likely to become among the most attractive targets for hacking. In particular, data on vehicle capabilities, status, location and route as well as driver and passenger data need to be protected from cyber-threats.

Vehicle manufacturers and their suppliers agree that V2I and V2V networking and connectivity protocols have to be developed with security embedded along the entire development phase. That means, for example, that all units connected to the vehicle shall be protected by ensuring the units and communications are intrinsically secure, incorporating secure coding and encrypted communications, and that data privacy is safeguarded not only by normative expectations such as the law, but already on the vehicle system level. For instance, each connected vehicle unit can only communicate data to the units that are absolutely relevant for its functionality. Furthermore, access is only granted to units having the corresponding access rights.

One of the most challenging issues are attacks on the software, where hackers exploit errors like software bugs, configuration or specification errors. Numerous new errors are reported every year in all operating systems, including Linux, Windows and others. Operating systems, communications drivers and applications cannot be directly secured and need to be sandboxed in some way. Security by design is a must and is achieved by using a formally proven kernel to protect entry points like the IVI, navigation and telematics, vehicle control or connectivity system.

Finally, let’s have a look at a security technology which might have the capability to disrupt the way security is implemented in connected vehicles: blockchain. But not only that, blockchain could disrupt how vehicles are made, how they are used and how they are maintained. A blockchain is a register of distributed records in batches or blocks. These blocks are securely linked together in a virtual space where each valid block carries a time-stamped transaction and is linked with the previous block. In the implementation of a secure telematics platform using a blockchain, the platform is capable of authenticating telematics data from the vehicle and controlling access along with the authorization of all vehicle ecosystem stakeholders linked with it. Blockchain-driven crypto-currency may possibly empower vehicles with a repository which could facilitate the purchase of mobile services like updates, parking fees, toll fees and so on. Blockchain gives the vehicle a digital identity and preserves drivers’ and passengers’ privacy at the same time.
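The block-linking mechanism described above can be sketched in a few lines: each block carries a timestamp and the hash of its predecessor, so tampering with any earlier telematics record invalidates the chain. This is a toy chain for illustration, not a distributed ledger with consensus.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash over the block body (everything except the stored hash)."""
    body = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data: dict, prev_hash: str) -> dict:
    """Seal a batch of telematics records, linking it to the previous block."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain: list) -> bool:
    """Check every block's own hash and its link to the predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False            # block body was altered after sealing
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False            # link to predecessor is broken
    return True

genesis = make_block({"event": "vehicle registered"}, prev_hash="0" * 64)
chain = [genesis, make_block({"odometer_km": 12345}, genesis["hash"])]
print(verify_chain(chain))           # True
chain[0]["data"]["event"] = "forged" # tamper with history...
print(verify_chain(chain))           # False: the genesis hash no longer matches
```

This is why a blockchain-backed telematics record resists odometer fraud or retroactive log editing: changing history requires recomputing and re-distributing every subsequent block.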

4.7 Conclusions

There are sensor, computing, communications, software and high-definition map technologies which have an impact on V2X networking and connectivity. To achieve fully automated and autonomous self-driving vehicle capabilities, SAE level 4 or 5, vehicle manufacturers must equip vehicles with an array of sensors that can identify pedestrians, road markings, traffic signs, other vehicles, and miscellaneous objects both day and night as well as under all weather conditions that human drivers are typically able to navigate. Generating 3D environment models and range estimation using only video cameras is possible, but requires more complex processing of synchronized stereo images, calculating structure from motion. LIDAR is more precise and offers edge detection, which increases processing efficiency by reducing the output data to only regions of interest (ROIs). Combined, the different views of a vehicle’s environment made by ultrasonic sound, video cameras, radar and LIDAR enable range and resolution far exceeding those of human vision. The sensor outputs get integrated into a three-dimensional 360-degree environmental model by running advanced sensor fusion algorithms on high-performance automated and autonomous driving computing platforms.

Ultrasonic sound, video, radar and LIDAR sensor technologies do not rely on V2X networking and connectivity for their implementation in vehicles. On the other hand, assuming functional safety is achieved for V2X networking and connectivity, high throughput together with bounded latency and high reliability becomes feasible for vehicle sensors in certain use cases. The sharing of full raw sensor data beyond the nearest vehicle neighbors in highly dynamic traffic scenarios is not feasible, but also most likely not needed, since it involves too much data to be sent and contains highly dynamic data which quickly becomes irrelevant with distance. There is the option to use 5G communications technology for the exchange of full sensor data and the sharing of full 3D images, requiring a very high data throughput. Then the requirements on V2X networking and connectivity are qualitatively set to strictly bounded delay, maximum reliability inside the delay bounds and achievable throughput. If it is necessary to ensure high sensor data rates with simultaneously high spatial 3D resolutions, sensor networking and connectivity becomes challenging.

The evolution of powerful central computing platforms for autonomous and automated driving is required due to the sheer amount of data to be processed and the scalability and flexibility to be implemented. The development of computing platforms starts from decentralized ECUs, moves on to central ECUs and evolves toward computing platforms with server domains and virtualized ECUs.

It is very challenging to develop a wireless V2X networking and connectivity technology able to cope with the extreme complexity of a vehicular network, in terms of mobility, environment dynamics and technology heterogeneity, and to fulfil the often-contradictory requirements of all vehicular use cases. The foremost challenges of V2X are the high vehicle mobility and the high variability of the surroundings in which vehicles run. A huge number of different configurations are possible, ranging from highways with relative inter-vehicle velocities of up to 300 km/h and a comparatively low spatial density, to urban city crossings where relative inter-vehicle velocities are on the order of lower tens of km/h and the spatial density is extremely high.

A connected vehicle with V2X networking and connectivity is expected to have all the functionalities that one expects from a smartphone. But V2X networking and connectivity is not synonymous with autonomous or automated driving; an autonomous or automated vehicle does not per se require V2X. Autonomous vehicles have many sensors, and these alone are sufficient to keep the vehicle moving without V2X communications. We do think, however, that V2X is a sensor extension for autonomous and automated vehicles: it improves situational awareness, provides redundancy when sensors fail, updates in-vehicle databases as well as firmware and software over the air (FOTA, SOTA), enables telemetry, and helps to resolve traffic bottlenecks and reduce road congestion.

The V2X networking and connectivity core standards for DSRC in the United States are SAE J2735, IEEE 1609 and IEEE 802.11. In Europe these are ETSI TS 103 175, 102 687, 102 724, 102 941, 103 097, 102 539, EN 302 663 and 302 636, and for CEN/ISO TS 19321 and 19091, which were published by 2014. Regarding automated and autonomous driving, both C-ITS and DSRC have to be extended. What is currently lacking is a functional safety concept with fail-safe functions.

The vehicle software focus is on firmware now. V2X networking and connectivity introduces new challenges to software, like software applications (apps), real-time changes over the air and multiple-source software. The broadly implemented AUTOSAR and GENIVI compliant V2X stacks need to be extended for upcoming V2X networking and connectivity standards like LTE and 5G. And V2X networking and connectivity becomes another component of a mobile firmware-over-the-air (FOTA) or software-over-the-air (SOTA) system, including the packager, the vehicle inventory and the distribution server in the cloud, the vehicle OTA client and the protocol and reporting.

In the HAD map use cases, the vehicle navigation system supports automated or autonomous driving with geofencing, calculation of HAD routes, generation of HAD manoeuvre advice, delivery of smart safe-state locations, and synchronization of the navigation user interface with the HAD map status. The electronic horizon of HAD maps connects the worlds of navigation and telematics with highly automated and autonomous driving. Challenges lie ahead in the collection and exchange of data, the storage and processing of these data, and the creation of precise, up-to-date, real-time HAD maps. For live dynamic HAD maps, the required V2X data throughput per vehicle in uplink and downlink depends on the specific use case. High data throughput use cases, for instance, are related to HAD maps with occupancy grids and full LIDAR, video camera or radar sensor images. Moderate data throughput use cases are the sharing of planned trajectories and high-level coarse travelling decisions. Low data throughput applications are short emergency messages, short messages to coordinate manoeuvres and the periodic broadcast of vehicle status messages.
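The throughput tiers above can be made concrete with a small sketch that derives the V2X capacity a vehicle must provision from its active use cases. The use-case names and tier labels are assumptions chosen for this illustration, not standardized values.

```python
# Illustrative mapping of live HAD-map use cases to throughput tiers,
# following the high / moderate / low grouping described in the text.
THROUGHPUT_TIERS = {
    "full_sensor_image_sharing": "high",      # occupancy grid, raw LIDAR/camera/radar
    "planned_trajectory_sharing": "moderate",
    "coarse_travel_decisions": "moderate",
    "emergency_message": "low",
    "manoeuvre_coordination": "low",
    "vehicle_status_broadcast": "low",
}

def required_tier(use_cases):
    """Return the highest throughput tier demanded by a set of active use cases."""
    order = {"low": 0, "moderate": 1, "high": 2}
    tiers = [THROUGHPUT_TIERS[u] for u in use_cases]
    return max(tiers, key=order.__getitem__)

print(required_tier(["emergency_message", "planned_trajectory_sharing"]))  # -> moderate
```

The design choice here is that link provisioning is driven by the most demanding concurrent use case, which is why sharing raw sensor images dominates everything else.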

Mobile security architectures have converged for many years toward a security architecture based on three pillars. First, secure elements or hardware coprocessors implement the root of trust, cryptography and transactions. Second, trusted execution environments (TEE) and secure operating systems ensure a well-defined environment in which to run applications and services. Third, there are software hypervisors. TEE and hypervisors need to be significantly reinforced for automated and autonomous vehicles.
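The first pillar, a hardware root of trust anchoring a verified chain of software stages, can be sketched as follows. This is a hypothetical illustration: the HMAC merely stands in for a secure-element signature, and the root key would in practice be burned into tamper-resistant hardware, never held in software.

```python
import hashlib
import hmac

# Stand-in for a key stored in a secure element (assumption for illustration)
ROOT_KEY = b"root-of-trust-key"

def measure(blob: bytes) -> bytes:
    # Measurement: cryptographic hash of a boot stage image
    return hashlib.sha256(blob).digest()

def sign(digest: bytes) -> bytes:
    # Stand-in for a secure-element signature over the measurement
    return hmac.new(ROOT_KEY, digest, hashlib.sha256).digest()

def verify_chain(stages) -> bool:
    """stages: list of (image, expected_signature); every stage must verify,
    otherwise the boot is aborted and the TEE is never entered."""
    return all(
        hmac.compare_digest(sign(measure(image)), sig)
        for image, sig in stages
    )

bootloader = b"bootloader-image"
tee_os = b"trusted-os-image"
chain = [(bootloader, sign(measure(bootloader))), (tee_os, sign(measure(tee_os)))]
print(verify_chain(chain))  # -> True
```

The TEE of the second pillar relies on exactly this property: it only offers a well-defined environment if every stage beneath it was measured and verified against the hardware anchor.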

Data privacy and security are, however, specific to the vehicle ecosystem, since vulnerabilities have to be prevented in any case under the economic constraints of implementation. And the combination of vehicle ADAS and control functions with wireless networking and connectivity requires dedicated efforts to provide functional safety. Functional safety, security and privacy standards must be developed for each level of autonomous and automated driving, covering in-vehicle computing and communication, inter-vehicle communication, the infrastructure and vehicle-to-infrastructure communication.

ISO 26262 is a must-have, in particular for autonomous and automated driving. Vehicle ecosystem hardware and software platforms must inherently deliver security and privacy. It is clear that the vehicle functional safety standard, ISO 26262, needs extensions for connected, automated or autonomously driving vehicles. Given the increasing complexity of automated and autonomous vehicles, including vehicle networking and connectivity, a profound and growing focus on functional safety and security is inevitable. The challenges and constraints of the wireless technologies to be implemented, as well as the potential threats in the vehicle ecosystem, must be understood and assessed early and throughout the wireless networking and connectivity development life cycle.

The evolution of V2X technologies should look and feel familiar from its introduction and require virtually no new learning from drivers, passengers and the other stakeholders of the autonomous and automated vehicle ecosystem. We think that mirroring established user interface conventions, building on existing user behaviors and incorporating well-liked smartphone apps and designs will be a successful way forward. V2X networking and connectivity has to contribute to stress reduction and make drivers and passengers feel safe by building trust. V2X technologies shall therefore deliver only the right data at the right time, since when driving, less is more. V2X must fit into a simple and consistent visual look and feel, supporting multi-modal cues for individual preferences. Finally, V2X technologies should allow users to participate in and shape the predictions vehicles make on their behalf; the challenges are to make privacy settings obvious and easy to use, to give drivers and passengers more control up front when onboarding new features, and to let them quickly and easily take back control.

The key performance indicators for V2X networking and connectivity are reliability, data throughput and security. Functional safety, however, becomes a key challenge for the integration of sensor, computing, communication, software and high-definition map technologies, owing to the specifics of wireless communication technologies.

