Cooperative Intelligence for Autonomous Driving

  Abstract: Autonomous driving is an emerging technology that has attracted interest from various sectors in recent years. Most existing work treats autonomous vehicles as isolated individuals and has focused on developing separate intelligent modules. In this paper, we attempt to exploit the connectivity among vehicles and propose a systematic framework to develop autonomous driving techniques. We first introduce a general hierarchical information fusion framework for cooperative sensing to obtain global situational awareness for vehicles. Following this, a cooperative intelligence framework is proposed for autonomous driving systems. This general framework can guide the development of data collection, sharing, and processing strategies to realize different intelligent functions in autonomous driving.
  Keywords: autonomous driving; cooperative intelligence; information fusion; vehicular communications and networking
  1 Introduction
  As an emerging technology attracting exponentially growing research and development interest from various sectors including academia, industry, and government, autonomous driving is expected to bring numerous benefits to our everyday life, including increased safety, alleviation of traffic congestion, improved parking, and more efficient utilization of transportation resources, to name a few (see e.g., [1]-[3]).
  Though research on driverless vehicles dates back to as early as the 1920s and the first prototypes of autonomous vehicles appeared in the 1980s, they did not attract wide attention until a little more than a decade ago, owing to the limitations of the hardware of the time. In the past decade, however, there have been huge improvements in many supporting technologies such as sensing, high-performance computing, artificial intelligence, computer vision, and wireless communications and networking. These advances promise a bright future for driverless vehicles, and their realization and common adoption in daily life are now firmly on the agenda. Many research institutes and companies around the world have already put driverless vehicles to road tests, even on public roads. In the United States, many states permit driverless vehicles to operate in their public transportation systems, and companies such as Uber and Google are putting their driverless cars into operation.
  Throughout the recent years of research and development in autonomous driving, however, the starting point has always been the case of human drivers. More specifically, existing vehicle automation work has focused almost exclusively on developing modules to assist human driving, such as auto-parking, lane departure warning, and automatic emergency braking (see e.g. [4]-[6]). Building on this work, intelligence has been gradually introduced into more and more driving functions, with the hope that one day, with the integration of all these individual modules, the vehicle can become fully intelligent and accomplish autonomous driving.
  This strategy enables the developed technologies to be deployed into current transportation systems and can bring investment returns within a short time window. However, it also greatly limits the scope of autonomous vehicle research. The inherited module-based approach, originally introduced to assist human driving, does not embody any systematic vision or plan for achieving the level of overall intelligence needed to take over from human drivers. Moreover, such a framework leads to the mentality of treating autonomous vehicles as isolated individuals, which naturally inherits the human driving mechanism but is a long way from maximally exploiting machine intelligence or the potential of vehicle cooperation (see e.g. [7]-[9]). As a result, by imposing the human driving mentality onto machines, this widely adopted framework results in very high cost while achieving very limited reliability [10].
  In this paper, instead of treating autonomous vehicles as isolated individuals, we introduce interconnectivity, and hence cooperation, among vehicles into the system design. Based upon this capability, we investigate how the redundancy provided by different types of sensors can be best utilized to improve sensing reliability, and how the spatial diversity provided by different vehicles can be exploited to extend the sensing range. A heterogeneous vehicular network structure is proposed to support information sharing during cooperation. A general hierarchical information fusion framework is then established for cooperative sensing to achieve global situational awareness in autonomous driving. An example of cooperative simultaneous localization and mapping with moving object tracking (SLAM-MOT) is presented to illustrate the framework. This sensing framework is then extended into a cooperative intelligence framework for designing the autonomous driving system. Last but not least, the issues associated with the communications system design are discussed and some important remarks are given.
  2 Cooperative Sensing Obtains Global Situational Awareness
  Sensing is the fundamental task in autonomous vehicles, providing the necessary information for intelligent driving. Since the first prototypes, many sensing techniques have been applied in autonomous vehicles. Existing autonomous vehicles usually carry many sensing devices of different types to execute different sensing tasks at different ranges. To enable intelligence, instead of providing raw sensing data, the sensing module extracts relevant sensing information via data analytics. Much effort has been devoted to better data analytics to improve the quality of the information extracted from sensing data. Accordingly, to improve sensing performance, instead of installing more expensive sensing devices, people are now focusing on designing better analytical methods that improve the quality of sensing information while using low-cost devices.
  In most existing work, since the starting point is to assist human drivers or to mimic their sensing capability, an autonomous vehicle is treated as an isolated individual. Furthermore, a particular sensing task in a given sensing range is usually accomplished by a very limited number of sensors. As a result, if any of them malfunctions, the entire system has no backup for correction. For example, in the notorious Tesla accident in Florida on May 7, 2016, the vehicle relied only on the camera and computer vision techniques for obstacle detection, and this technique failed to recognize a truck due to the lighting conditions. This framework resembles a single driver operating the vehicle with no one else to correct his or her conduct when needed; the only advantages of the autonomous vehicle over a human driver are that the machine never gets fatigued and has a lower error rate.
  To improve the reliability of the entire system, there has recently been extensive research on extracting sensing information from the data of multiple on-board sensors on the same vehicle, such as light detection and ranging (LiDAR), various kinds of cameras, radars with different wavelengths and detection ranges, and sonar (see e.g. [11]-[20]). However, due to cost considerations and space limits, the sensors that can be installed on a single vehicle are very limited, which means the data sources are still limited. More importantly, they are all on the same vehicle, so their sensing ranges are bounded by the position and vision range of that vehicle and cannot avoid possible blind spots.
  On the other hand, the entire system contains multiple autonomous vehicles, each equipped with its own sensors. If information can be cooperatively extracted from sensing data collected at different locations, the limitations of individual vehicles can be overcome and beyond-the-vision-range awareness achieved. In addition, the intelligent transportation system contains many road side units (RSUs) that constantly monitor traffic conditions. If this information can be shared with the autonomous vehicles, the sensing range can be extended further, and global situational awareness can hopefully be obtained. Different sensing strategies are illustrated in Fig. 1. In Fig. 1a, the white car tries to sense the environment by itself; since it is located in relatively crowded traffic and its view is blocked by the red car and the green car, its sensing range is very limited. When communications and information sharing are enabled among the three vehicles, as shown in Fig. 1b, the sensing range is greatly extended. If an RSU is also in the scene and shares its sensing information with all the vehicles, the sensing range is extended further, as shown in Fig. 1c. If information sharing across the entire intelligent system is enabled, global situational awareness becomes obtainable. From these illustrations, we see that with cooperative sensing, the former troublemakers now serve as helpers in the sensing process.
  3 Heterogeneous Vehicular Network Structure
  To obtain the global situational awareness as illustrated above, sensing information across the entire system should be collected and shared via a supporting communications and networking infrastructure. We establish a global sensing and processing framework as illustrated in Fig. 2. There are mainly three types of entities in the framework:
  Autonomous vehicles are the major participants in the transportation system. They use the road and change the traffic conditions; at the same time, they can sense the road and traffic conditions as well as other participants such as other vehicles, pedestrians, and animals.
  Intelligent transportation infrastructure refers to auxiliary equipment installed in the transportation system, such as road side units (see e.g. [21]-[23]). These units usually do not participate in traffic and only collect information about road and traffic conditions. Their original purpose is to report basic traffic monitoring information to a control center to guide the traffic. Within our framework, however, we allow them to share information such as traffic flow and road congestion conditions with the autonomous vehicles via vehicle-to-infrastructure (V2I) communications [24] to improve the vehicles' situational awareness.
  Cloud processing center refers to a global center that collects the processed sensing information from the entities across the entire system and creates a global situational awareness of the traffic conditions (see e.g. [25]). When needed, the center can send intervening control signals to some vehicles to guide their behavior for improved system efficiency or security. For example, if the center learns that a particular road is jammed, it can send control information to re-route some vehicles, diverting further traffic away from that road and alleviating the congestion; a toy sketch of such a re-routing decision follows this list.
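As a toy illustration of such an intervention, the following Python sketch shows how a cloud center might splice a detour into a vehicle's route when a segment's reported congestion exceeds a threshold. The route representation, the congestion scale, and the threshold are all assumptions made for this sketch, not part of the proposed framework.

```python
from typing import Dict, List

def reroute_if_jammed(route: List[str],
                      congestion: Dict[str, float],
                      alternatives: Dict[str, List[str]],
                      threshold: float = 0.8) -> List[str]:
    """Toy cloud-center policy: replace any route segment whose congestion
    level (assumed to be normalized to [0, 1]) exceeds the threshold with a
    known alternative sub-route. Purely illustrative."""
    new_route: List[str] = []
    for segment in route:
        if congestion.get(segment, 0.0) > threshold and segment in alternatives:
            new_route.extend(alternatives[segment])  # detour around the jam
        else:
            new_route.append(segment)
    return new_route

# Example: segment "B" is jammed, so the center routes the vehicle via B1, B2.
print(reroute_if_jammed(["A", "B", "C"],
                        congestion={"B": 0.95},
                        alternatives={"B": ["B1", "B2"]}))
```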
  With this proposed framework, global situational awareness can be constructed by collecting, sharing, and processing all sensing information across the entire system, including that provided by the onboard sensors of autonomous vehicles and that from the intelligent transportation infrastructure. To support the information sharing within this framework, the network must be heterogeneous, with different communication links meeting a wide range of QoS requirements (a schematic summary in code follows the list):
  Among autonomous vehicles: To facilitate cooperative sensing, a large amount of sensing data needs to be shared among different autonomous vehicles. Depending on the time sensitivity of the sensing data, different communication delay requirements apply, and depending on the data volume, different bandwidths are demanded. Generally speaking, high-bandwidth and low-latency communication techniques are needed to support the vehicular network (see e.g. [26]-[31] for some proposed solutions).
  From the intelligent transportation infrastructure to the autonomous vehicles: The sensing information provided by the intelligent transportation units is usually low-rate, but it needs to be communicated to the autonomous vehicles with low latency. In addition, the communication range of the infrastructure units is usually limited while the vehicles move at high speed, so frequent handoffs between a vehicle and different infrastructure units will be needed. This must be considered in the design of the network infrastructure of the entire system.
  Between autonomous vehicles and the cloud processing center: The communications between autonomous vehicles and the cloud processing center are bi-directional. For the uplink, the data volume is usually large, but the latency requirement is usually loose since global road and traffic conditions typically do not change that fast; hence bandwidth is the greater concern. The downlink usually carries more time-sensitive control information of small volume, so latency is the major concern.
  From the intelligent transportation infrastructure to the cloud processing center: This is already included in the current intelligent transportation system and can be directly applied to our proposed framework.
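The four link types above differ mainly in bandwidth demand, latency tolerance, and handoff sensitivity. The following Python sketch records one possible reading of these trade-offs as data; the numbers are illustrative placeholders chosen to mirror the qualitative discussion, not requirements from the paper.

```python
from dataclasses import dataclass

@dataclass
class LinkQoS:
    """Illustrative QoS profile for one link type in the heterogeneous network."""
    name: str
    bandwidth_mbps: float     # sustained throughput demand (assumed figure)
    max_latency_ms: float     # end-to-end delay bound (assumed figure)
    handoff_sensitive: bool   # whether frequent handoffs must be managed

# Hypothetical profiles for the four link types discussed above.
LINK_PROFILES = [
    LinkQoS("V2V sensing exchange",      bandwidth_mbps=100.0, max_latency_ms=10.0,  handoff_sensitive=False),
    LinkQoS("infrastructure-to-vehicle", bandwidth_mbps=1.0,   max_latency_ms=20.0,  handoff_sensitive=True),
    LinkQoS("vehicle-to-cloud uplink",   bandwidth_mbps=50.0,  max_latency_ms=500.0, handoff_sensitive=False),
    LinkQoS("cloud-to-vehicle downlink", bandwidth_mbps=0.1,   max_latency_ms=50.0,  handoff_sensitive=False),
]

def feasible(link: LinkQoS, offered_mbps: float, delay_ms: float) -> bool:
    """Check whether an available channel meets a link type's profile."""
    return offered_mbps >= link.bandwidth_mbps and delay_ms <= link.max_latency_ms
```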
  With the heterogeneous vehicular network supporting information sharing among the different entities, we are ready to develop the cooperative sensing and cooperative intelligence frameworks.
  4 A Hierarchical Information Fusion Framework for Cooperative Sensing
  Given the possibility and the many advantages of collaboration among vehicles during sensing, the main challenge to collaboration among different sensors is the heterogeneity of their data, even in the relatively simple case of collaborating sensors located on the same vehicle. Some papers in the literature address this latter case; however, they are all limited to specific combinations of specific sensors, and the resulting algorithms cannot be applied to general cases (see e.g. [32]-[35]). When sensor properties change, or when new sensors or sensor types are introduced, one has to completely re-formulate and re-solve the fusion problem. Moreover, these algorithms cannot combine sensor information obtained from different vehicles. All of these issues stem from the fact that existing work tries to operate directly on the heterogeneous data; there is no general framework to guide how the heterogeneous data should be collected, shared, and processed to achieve the desired situational awareness.
  To facilitate the design of cooperative sensing, we first categorize the data based upon how their content relates to the driving tasks:
  Data: This refers to raw sensor data. Given the many different kinds of sensors in the system, the data take different forms, are reported at different rates, and contain different information. In general, most raw sensor data, e.g. the point cloud provided by a LiDAR or the pictures captured by a camera, cannot be used directly in the intelligent modules and must be processed to provide driving-related guidance. In the context of cooperative sensing, sharing these types of data is also unrealistic due to their sizes.
  Information: This refers to data describing the general driving environment that can be used in driving tasks, for example a map of the area around the vehicle or the identification of objects in the traffic scene. The "information" is usually the result of some preliminary processing of the raw sensor data.
  Knowledge: This refers to data containing specific content that can readily guide the intelligent decisions in autonomous driving, for example the driving status of other vehicles, the intentions of other objects in the traffic scene, or the road condition along a particular route. One can treat knowledge as further-processed information. The major difference between "information" and "knowledge" is that while information can usually be extracted from the raw data of a single sensor, knowledge is usually the result of processing data from multiple sensors or even multiple vehicles.
  It should be noted that although we introduce these different categories, they are all simply data during the fusion process, and there is no clear dividing line among them. They are introduced here to describe the readiness of the data for intelligent driving tasks and to indicate their conciseness; a minimal sketch of this categorization follows.
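The sketch below encodes the three categories and a toy sharing policy, under the assumption that sharing decisions hinge on both the processing level and the message size; the budget value is an arbitrary placeholder, not a figure from the paper.

```python
from enum import Enum

class FusionLevel(Enum):
    """The three content categories used in the hierarchical fusion framework."""
    DATA = 1         # raw sensor output, e.g. point clouds or images
    INFORMATION = 2  # locally processed environment description, e.g. a local map
    KNOWLEDGE = 3    # decision-ready content, e.g. another vehicle's driving status

def share_over_v2v(level: FusionLevel, size_mb: float,
                   budget_mb: float = 1.0) -> bool:
    """Toy sharing policy: raw data stay local; processed content is shared
    when it fits an assumed per-message V2V budget."""
    if level is FusionLevel.DATA:
        return False
    return size_mb <= budget_mb

# A LiDAR point cloud stays local, while a compact local map is shared.
print(share_over_v2v(FusionLevel.DATA, size_mb=50.0))        # False
print(share_over_v2v(FusionLevel.INFORMATION, size_mb=0.3))  # True
```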
  With this categorization of data, we propose a hierarchical information fusion procedure for cooperative sensing to obtain global situational awareness in autonomous driving (Fig. 3). As seen in the figure, there are multiple levels of data processing and information fusion, depending on the nature of the data and on how the required information or knowledge is obtained from them. At the lowest level, data from different sensors are processed to extract information about the environment. The extracted information can then be shared among different vehicles to provide more comprehensive knowledge about the driving situation; the structure of this procedure is sketched below.
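Structurally, the procedure can be read as two composable stages: per-vehicle extraction of information from heterogeneous raw data, followed by cross-vehicle fusion of the shared information into knowledge. The sketch below captures only this structure; the extractor and fuser bodies are application-specific placeholders, not algorithms from the paper.

```python
from typing import Callable, Dict, List

def local_extract(raw_by_sensor: Dict[str, object],
                  extractors: Dict[str, Callable[[object], dict]]) -> List[dict]:
    """Level 1 (per vehicle): apply the matching extractor to each sensor's
    raw data, producing uniform information records."""
    return [extractors[name](raw)
            for name, raw in raw_by_sensor.items() if name in extractors]

def global_fuse(shared_info: List[List[dict]],
                fuser: Callable[[List[dict]], dict]) -> dict:
    """Level 2 (across vehicles): pool every vehicle's shared information
    and fuse it into one piece of knowledge."""
    pooled = [record for vehicle in shared_info for record in vehicle]
    return fuser(pooled)

# Toy usage: two vehicles each extract object counts, fused into a total.
extractors = {"lidar": lambda pts: {"objects": len(pts)}}
v1 = local_extract({"lidar": [(0, 1), (2, 3)]}, extractors)
v2 = local_extract({"lidar": [(5, 5)]}, extractors)
print(global_fuse([v1, v2], lambda recs: {"objects": sum(r["objects"] for r in recs)}))
```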
  4.1 An Illustrative Example: Cooperative SLAM-MOT
  To better explain our proposed framework, we use one important task of autonomous driving, simultaneous localization and mapping with moving object tracking (SLAM-MOT) [36], as an illustrative example. The typical data used for localization are GPS and inertial measurement unit (IMU) data, while the typical data for mapping and moving object tracking come from LiDAR, cameras, sonar, radar, and so on. To achieve the best cooperative performance, one would ideally share all data among all users. However, this creates a huge communication burden, especially for high-volume data such as point clouds from LiDAR and images from cameras, so these data have to be processed locally into some form of information before being shared. At the same time, the vehicles are distributed in space, and their vision ranges and views are quite different, which means some scheme is needed to handle the spatial heterogeneity of these data. In addition, different autonomous vehicles may be equipped with different types of sensors, so there is also heterogeneity in the information content of these data. To implement the proposed hierarchical information fusion framework, one therefore faces the challenges introduced by this heterogeneity.
  One possible solution to the data heterogeneity is to find a uniform representation of these data via local processing. For example, the local sensor data can be processed to generate an occupancy grid map (OGM) [37] of the area around a vehicle (e.g. our work in [38]). With the OGM representation, all forms of raw sensor data are converted to uniform information about the occupancy states of map grids in space, which solves the heterogeneity in information. At the same time, the location and vision range of each vehicle are reflected in the area covered by its OGM, which solves the heterogeneity in space. Moreover, the quantified occupancy probabilities reflect the quality and confidence levels of the sensor data, which can guide the data fusion process. With this local processing and mapping, the local maps, instead of the raw sensor data, can be shared among different vehicles to build a global map, collecting the spatial diversity provided by multiple vehicles at different locations and achieving beyond-the-vision-range situational awareness; this also greatly alleviates the communication burden in our proposed fusion framework (a minimal sketch of such probabilistic map fusion appears below). Notice that one important feature of the proposed hierarchical information fusion framework is that the fusion strategy can be flexibly adjusted based upon the data volume and the supporting communication network: high-volume data would generally be processed locally at lower levels to avoid a high communication burden, whereas a network capable of supporting high volumes allows more unprocessed raw data to be sent, with less loss of information during fusion.
  The information can then be further processed to obtain other information. For example, one can analyze the series of occupancy maps obtained over a period and identify objects based upon the dynamics of the occupancy grids on the map: a large occupied area corresponds to a large object such as a car or a surrounding building, while a small occupied area corresponds to a small object such as a motorcycle or a pedestrian; a fast-moving occupied area corresponds to a high-speed target such as a car or a motorcycle, while a slow-moving occupied area corresponds to a low-speed target such as a pedestrian or a surrounding building. With these multiple levels of data processing, traffic information is roughly constructed. On the other hand, some data can directly provide the knowledge needed for driving, for example the GPS data that localize the ego vehicle and the IMU data that provide its speed and acceleration. This self-awareness can then be combined with the vehicles' observations of other objects, obtained with other sensors, to provide dynamic localization and tracking of those objects (e.g. our work in [39]). The SLAM-MOT system described above, realized with our hierarchical information fusion framework, is summarized in Fig. 4.
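To make the map fusion step concrete, the sketch below fuses aligned local OGMs in log-odds space, a standard fusion rule for occupancy grids in which confident observations dominate cells that other vehicles have not seen. It assumes the maps are already registered to a common grid and that unseen cells sit at the prior; it is an illustrative stand-in, not the specific algorithm of [38].

```python
import numpy as np
from typing import List

def fuse_ogms(local_maps: List[np.ndarray], prior: float = 0.5) -> np.ndarray:
    """Fuse aligned occupancy probability maps from several vehicles.
    Each map covers the same grid, with cells outside a vehicle's field of
    view left at the prior, so they contribute nothing to the fusion."""
    prior_lo = np.log(prior / (1.0 - prior))
    fused_lo = np.full_like(local_maps[0], prior_lo, dtype=float)
    for m in local_maps:
        p = np.clip(m, 1e-6, 1.0 - 1e-6)          # keep log() finite
        fused_lo += np.log(p / (1.0 - p)) - prior_lo
    return 1.0 / (1.0 + np.exp(-fused_lo))        # back to probabilities

# Two vehicles observe different halves of the same 2 x 4 grid; unseen cells
# are reported at the 0.5 prior and take the other vehicle's value after fusion.
a = np.array([[0.9, 0.9, 0.5, 0.5], [0.1, 0.1, 0.5, 0.5]])
b = np.array([[0.5, 0.5, 0.8, 0.2], [0.5, 0.5, 0.2, 0.8]])
print(fuse_ogms([a, b]).round(2))
```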
  Then, to facilitate driving optimization and decisions, one needs to construct models describing the dynamics of the traffic participants, including vehicles, pedestrians, animals, and so on. Both the behavior pattern of a single participant and the interactions among multiple participants need to be considered in constructing the dynamic models. Based upon historical data, models can be selected and model parameters estimated; based upon the model and currently observed data, the future behavior or dynamics of the participants can then be predicted. To provide accurate prediction, the model must be composite and probabilistic. To be composite, multiple single models should be constructed to describe the possible dynamics of participants operating in different patterns (such as normal, abnormal, conservative, aggressive, or fatigued driving). To be probabilistic, probabilities should be assigned to the behavior patterns and to the intentions of the participants so as to cover all possibilities; a toy composite predictor is sketched below. In this way, intelligent decisions can be developed. The intelligent decision process will in turn provide feedback to the information collection and fusion process, indicating what kind of information is needed and at what level of quality. This intelligence framework is illustrated in Fig. 5.
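As a toy instance of a composite, probabilistic predictor, the sketch below mixes two single motion models for a participant's one-dimensional longitudinal motion: a nominal constant-velocity pattern and an aggressive accelerating pattern, under assumed pattern probabilities. In practice both the models and the weights would be selected and estimated from historical data, as described above.

```python
def constant_velocity(state, dt):
    """Nominal pattern: position advances at the current velocity."""
    pos, vel = state
    return pos + vel * dt, vel

def aggressive(state, dt, accel=2.0):
    """Aggressive pattern: constant acceleration (assumed 2 m/s^2)."""
    pos, vel = state
    return pos + vel * dt + 0.5 * accel * dt ** 2, vel + accel * dt

PATTERNS = [(0.8, constant_velocity), (0.2, aggressive)]  # assumed weights

def predict(state, dt):
    """Probability-weighted mixture prediction of (position, velocity)."""
    preds = [(w, model(state, dt)) for w, model in PATTERNS]
    return (sum(w * p for w, (p, _) in preds),
            sum(w * v for w, (_, v) in preds))

# A participant at the origin moving at 10 m/s, predicted 1 s ahead.
print(predict((0.0, 10.0), dt=1.0))  # approximately (10.2, 10.4)
```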
  5 A Cooperative Intelligence Framework for Autonomous Driving
  With the global situational awareness provided by cooperative sensing, we can further include collaboration among different vehicles in the decision process to obtain a cooperative intelligence framework. Within this framework, we focus on how information is collected, shared, and used during the intelligent driving process. The starting point is self-awareness, i.e. the driving decision or intention of the vehicle itself. For fully autonomous vehicles, this is provided directly by the decision module; for semi-autonomous vehicles, where a driver still controls the vehicle but the vehicle can observe and analyze the driver's behavior, it can be provided by the vehicle's intelligent driver behavior analysis module. The self-awareness of multiple cooperating intelligent vehicles can be shared among them. For other, non-intelligent vehicles, however, cooperative sensing must be conducted to obtain knowledge about them to supplement the self-awareness of the intelligent vehicles. With the proposed cooperative sensing framework, spatial diversity provides a more comprehensive picture of the traffic environment and overcomes the limited vision ranges of individual vehicles. During this process, it is important to determine which kinds of information or knowledge should be collected, processed, and shared among the vehicles; a sketch of such a shared self-awareness record follows.
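The kind of self-awareness record that cooperating vehicles might exchange, and its combination with cooperatively sensed non-intelligent participants, can be sketched as below. The field names and the confidence convention are assumptions for illustration, not a standardized message format.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class SelfAwareness:
    """Illustrative self-awareness record a cooperating vehicle might share."""
    vehicle_id: str
    position_m: Tuple[float, float]  # (x, y) in a common map frame
    speed_mps: float
    intention: str                   # e.g. "keep_lane", "lane_change_left"
    confidence: float                # 1.0 from a decision module; lower when
                                     # inferred from a human driver's behavior

def merge_awareness(own: SelfAwareness, received: List[SelfAwareness],
                    sensed_objects: List[dict]) -> Dict[str, list]:
    """Combine the shared self-awareness of intelligent vehicles with the
    cooperatively sensed non-intelligent participants into one picture."""
    return {"intelligent": [own] + received,
            "non_intelligent": sensed_objects}
```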
  6 Concluding Remarks and Prospects
  In this paper, we exploited collaboration among vehicles for autonomous driving. We proposed a hierarchical information fusion framework for cooperative sensing, which provides global situational awareness for autonomous driving, and a cooperative intelligence framework, which encourages collaboration among vehicles in their driving decision processes for improved system efficiency and security. The proposed frameworks are general and can provide valuable guidance for designing the individual sensing and decision modules in autonomous driving.
  One important issue for the proposed hierarchical information fusion and cooperative intelligence frameworks is data sharing among different entities, which depends heavily on V2X communications and networking techniques. One can tackle this issue from two aspects: (1) the design and management of the information fusion procedure, considering the quality of service provided by the current vehicular communication infrastructure; and (2) the design of the vehicular communication infrastructure to satisfy the data sharing requirements of the desired cooperative sensing and intelligence framework. On the one hand, the performance of cooperative sensing and intelligence is limited by the communications; on the other hand, cooperative sensing and intelligence provide good motivation and guidance for the design of better vehicular communications and networking structures.
  References
  [1] MAURER M, GERDES J C, LENZ B, et al. Autonomous Driving. Berlin, Heidelberg, Germany: Springer, 2016
  [2] YANG J, COUGHLIN J F. In-Vehicle Technology for Self-Driving Cars: Advantages and Challenges for Aging Drivers [J]. International Journal of Automotive Technology, 2014, 15(2): 333-340. DOI: 10.1007/s12239-014-0034-6
  [3] BIMBRAW K. Autonomous Cars: Past, Present and Future: A Review of the Developments in the Last Century, the Present Scenario and the Expected Future of Autonomous Vehicle Technology [C]//12th International Conference on Informatics in Control, Automation and Robotics (ICINCO). Colmar, France, 2015: 191-198
  [4] GAIKWAD V, LOKHANDE S. Lane Departure Identification for Advanced Driver Assistance [J]. IEEE Transactions on Intelligent Transportation Systems, 2014: 1-9. DOI: 10.1109/tits.2014.2347400
  [5] LIANG Y, REYES M L, LEE J D. Real-Time Detection of Driver Cognitive Distraction Using Support Vector Machines [J]. IEEE Transactions on Intelligent Transportation Systems, 2007, 8(2): 340-350. DOI: 10.1109/TITS.2007.895298
  [6] CHENG S Y, TRIVEDI M M. Turn-Intent Analysis Using Body Pose for Intelligent Driver Assistance [J]. IEEE Pervasive Computing, 2006, 5(4): 28-37. DOI: 10.1109/MPRV.2006.88
  [7] OKUDA R, KAJIWARA Y, TERASHIMA K. A Survey of Technical Trend of ADAS and Autonomous Driving [C]//2014 International Symposium on VLSI Technology, Systems and Application (VLSI-TSA). Taiwan, China, 2014: 1-4. DOI: 10.1109/VLSI-TSA.2014.6839646
  [8] LIN S-C, ZHANG Y, HSU C-H, et al. The Architectural Implications of Autonomous Driving: Constraints and Acceleration [C]//Twenty-Third International Conference on Architectural Support for Programming Languages and Operating Systems. Williamsburg, USA, 2018: 751-766. DOI: 10.1145/3173162.3173191
  [9] LIU S S, TANG J, ZHANG Z, et al. Computer Architectures for Autonomous Driving [J]. Computer, 2017, 50(8): 18-25. DOI: 10.1109/mc.2017.3001256
  [10] DAZIANO R A, SARRIAS M, LEARD B. Are Consumers Willing to Pay to Let Cars Drive for Them? Analyzing Response to Autonomous Vehicles [J]. Transportation Research Part C: Emerging Technologies, 2017, 78: 150-164. DOI: 10.1016/j.trc.2017.03.003
  [11] CHO H, SEO Y W, KUMAR B V K V, et al. A Multi-Sensor Fusion System for Moving Object Detection and Tracking in Urban Driving Environments [C]//IEEE International Conference on Robotics and Automation (ICRA). Hong Kong, China, 2014: 1836-1843. DOI: 10.1109/ICRA.2014.6907100
  [12] PONZ A, RODRÍGUEZ-GARAVITO C H, GARCÍA F, et al. Laser Scanner and Camera Fusion for Automatic Obstacle Classification in ADAS Application [M]//Communications in Computer and Information Science. Cham, Switzerland: Springer International Publishing, 2015: 237-249. DOI: 10.1007/978-3-319-27753-0_13
  [13] ZIEBINSKI A, CUPEK R, ERDOGAN H, et al. A Survey of ADAS Technologies for the Future Perspective of Sensor Fusion [M]//Computational Collective Intelligence. Cham, Switzerland: Springer International Publishing, 2016: 135-146. DOI: 10.1007/978-3-319-45246-3_13
  [14] ASVADI A, PREMEBIDA C, PEIXOTO P, et al. 3D LiDAR-Based Static and Moving Obstacle Detection in Driving Environments: An Approach Based on Voxels and Multi-Region Ground Planes [J]. Robotics and Autonomous Systems, 2016, 83: 299-311
  [15] DE SILVA V, ROCHE J, KONDOZ A. Fusion of LiDAR and Camera Sensor Data for Environment Sensing in Driverless Vehicles [EB/OL]. (2018-03-29)[2019-03-01]. https://arxiv.org/abs/1710.06230v2
  [16] AUFRÈRE R, GOWDY J, MERTZ C, et al. Perception for Collision Avoidance and Autonomous Driving [J]. Mechatronics, 2003, 13(10): 1149-1161. DOI: 10.1016/s0957-4158(03)00047-3
  [17] SHIMONI M, TOLT G, PERNEEL C, et al. Detection of Vehicles in Shadow Areas Using Combined Hyperspectral and Lidar Data [C]//IEEE International Geoscience and Remote Sensing Symposium. Vancouver, Canada, 2011: 4427-4430. DOI: 10.1109/IGARSS.2011.6050214
  [18] CHAVEZ-GARCIA R O, AYCARD O. Multiple Sensor Fusion and Classification for Moving Object Detection and Tracking [J]. IEEE Transactions on Intelligent Transportation Systems, 2016, 17(2): 525-534. DOI: 10.1109/tits.2015.2479925
  [19] GAO H B, CHENG B, WANG J Q, et al. Object Classification Using CNN-Based Fusion of Vision and LIDAR in Autonomous Vehicle Environment [J]. IEEE Transactions on Industrial Informatics, 2018, 14(9): 4224-4231. DOI: 10.1109/tii.2018.2822828
  [20] ASVADI A, GARROTE L, PREMEBIDA C, et al. Multimodal Vehicle Detection: Fusing 3D-LIDAR and Color Camera Data [J]. Pattern Recognition Letters, 2018, 115: 20-29. DOI: 10.1016/j.patrec.2017.09.038
  [21] SOU S I, TONGUZ O K. Enhancing VANET Connectivity through Roadside Units on Highways [J]. IEEE Transactions on Vehicular Technology, 2011, 60(8): 3586-3602. DOI: 10.1109/tvt.2011.2165739
  [22] BARRACHINA J, GARRIDO P, FOGUE M, et al. Road Side Unit Deployment: A Density-Based Approach [J]. IEEE Intelligent Transportation Systems Magazine, 2013, 5(3): 30-39. DOI: 10.1109/mits.2013.2253159
  [23] REIS A B, SARGENTO S, NEVES F, et al. Deploying Roadside Units in Sparse Vehicular Networks: What Really Works and What Does Not [J]. IEEE Transactions on Vehicular Technology, 2014, 63(6): 2794-2806. DOI: 10.1109/tvt.2013.2292519
  [24] MILANES V, VILLAGRA J, GODOY J, et al. An Intelligent V2I-Based Traffic Management System [J]. IEEE Transactions on Intelligent Transportation Systems, 2012, 13(1): 49-58. DOI: 10.1109/tits.2011.2178839
  [25] WANG J, JIANG C, HAN Z. Internet of Vehicles: Sensing-Aided Transportation Information Collection and Diffusion [J]. IEEE Transactions on Vehicular Technology, 2018, 67(5): 3813-3825. DOI: 10.1109/tvt.2018.2796443
  [26] CHEN S Z, HU J L, SHI Y, et al. Vehicle-to-Everything (V2X) Services Supported by LTE-Based Systems and 5G [J]. IEEE Communications Standards Magazine, 2017, 1(2): 70-76. DOI: 10.1109/mcomstd.2017.1700015
  [27] ZHANG R, CHENG X, YAO Q, et al. Interference Graph-Based Resource-Sharing Schemes for Vehicular Networks [J]. IEEE Transactions on Vehicular Technology, 2013, 62(8): 4028-4039. DOI: 10.1109/TVT.2013.2245156
  [28] CHENG X, YANG L, SHEN X. D2D for Intelligent Transportation Systems: A Feasibility Study [J]. IEEE Transactions on Intelligent Transportation Systems, 2015, 16(4): 1784-1793. DOI: 10.1109/tits.2014.2377074
  [29] CHENG X, ZHANG R, YANG L. 5G-Enabled Vehicular Communications and Networking [M]. Cham, Switzerland: Springer International Publishing, 2018
  [30] CHENG X, CHEN C, ZHANG W X, et al. 5G-Enabled Cooperative Intelligent Vehicular (5GenCIV) Framework: When Benz Meets Marconi [J]. IEEE Intelligent Systems, 2017, 32(3): 53-59. DOI: 10.1109/mis.2017.53
  [31] CHENG X, ZHANG R Q, YANG L Q. Wireless Toward the Era of Intelligent Vehicles [J]. IEEE Internet of Things Journal, 2019, 6(1): 188-202. DOI: 10.1109/jiot.2018.2884200
  [32] KIM J K, KIM J W, KIM J H, et al. Experimental Studies of Autonomous Driving of a Vehicle on the Road Using LiDAR and DGPS [C]//15th International Conference on Control, Automation and Systems (ICCAS). Busan, South Korea, 2015: 1366-1369. DOI: 10.1109/ICCAS.2015.7364852
  [33] JAIN A, SINGH A, KOPPULA H S, et al. Recurrent Neural Networks for Driver Activity Anticipation via Sensory-Fusion Architecture [C]//IEEE International Conference on Robotics and Automation (ICRA). Stockholm, Sweden, 2016: 3118-3125. DOI: 10.1109/ICRA.2016.7487478
  [34] XUE J R, WANG D, DU S Y, et al. A Vision-Centered Multi-Sensor Fusing Approach to Self-Localization and Obstacle Perception for Robotic Cars [J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 122-138. DOI: 10.1631/fitee.1601873
  [35] XIAO L, WANG R L, DAI B, et al. Hybrid Conditional Random Field Based Camera-LIDAR Fusion for Road Detection [J]. Information Sciences, 2018, 432: 543-558. DOI: 10.1016/j.ins.2017.04.048
  [36] WANG C C, THORPE C, THRUN S, et al. Simultaneous Localization, Mapping and Moving Object Tracking [J]. The International Journal of Robotics Research, 2007, 26(9): 889-916. DOI: 10.1177/0278364907081229
  [37] BOUZOURAA M E, HOFMANN U. Fusion of Occupancy Grid Mapping and Model Based Object Tracking for Driver Assistance Systems Using Laser and Radar Sensors [C]//IEEE Intelligent Vehicles Symposium, San Diego, USA, 2010: 294-300. DOI: 10.1109/IVS.2010.5548106
  [38] LI Y R, DUAN D L, CHEN C, et al. Occupancy Grid Map Formation and Fusion in Cooperative Autonomous Vehicle Sensing (Invited Paper) [C]//IEEE International Conference on Communication Systems (ICCS). Chengdu, China, 2018: 204-209. DOI: 10.1109/ICCS.2018.8689254
  [39] YANG P T, DUAN D L, CHEN C, et al. Optimal Multi-Sensor Multi-Vehicle (MSMV) Localization and Mobility Tracking [C]//IEEE Global Conference on Signal and Information Processing (GlobalSIP). Anaheim, USA, 2018: 1223-1227. DOI: 10.1109/GlobalSIP.2018.8646626