The overall goal of the platform is to carry out research that helps enable Internet of Vehicles (IoV) systems that are both technically resilient and useful, and that fundamentally support human-centred, socially aligned values. In doing so, we encourage researchers to apply for funding in this area, addressing challenges around AI, human-system interaction, multi-agent systems and security. Specifically, there are four key themes: interaction, automation, services, and security. Each is detailed below.
This theme focuses on the human-in-the-loop in automated systems. For example, how does a human consent to sharing data, joining a platoon, or giving way at an intersection? How does this human interaction occur within an autonomous system? How can trust in the system be established? These are just a few of the interaction design challenges within the platooning scenario alone. There are related ones around the data shared to select a vehicle even before getting into traffic, and choices within the vehicle around service selection, from insurance to entertainment, and how that information will be shared with services touching the IoV, to name just a few. The theme should draw on fundamental research in HCI, from cognitive load models, to designs that support decision making, to research around head-up displays. Exploring these interactions from a human-centred, human-owns-the-loop perspective, not just human-in-the-loop, will also add complexity to design considerations (it would be easier to simply access a citizen's journey information rather than asking for consent, but that lack of consent is socially unpalatable). We anticipate, however, that building for this complexity will also enhance system resilience and success. It is more complex to model, but failure to do so leads to rejection (as we have seen in the UK over untrusted smart meters) and to security failures (such as the Domain Name System (DNS) denial-of-service attacks).
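To make the consent point above concrete, the following is a minimal sketch, in Python, of what consent-first access to journey data might look like under a human-owns-the-loop reading; the class names, field names and purpose strings are illustrative assumptions, not a proposed design.

```python
# A minimal sketch of consent-first access to journey data: data is only
# released after an explicit, scoped, time-limited grant. All names here
# (ConsentGrant, ConsentLedger, the purpose strings) are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class ConsentGrant:
    purpose: str          # e.g. "platoon_coordination", "congestion_planning"
    data_scope: str       # e.g. "journey_route", "departure_time"
    expires_at: datetime  # grants are time-limited rather than open-ended
    revoked: bool = False


@dataclass
class ConsentLedger:
    grants: list = field(default_factory=list)

    def ask(self, purpose: str, data_scope: str, ttl_hours: int = 24) -> ConsentGrant:
        """Prompt the occupant (stubbed with input()) instead of silently accessing data."""
        answer = input(f"Share {data_scope} for {purpose}? [y/n] ").strip().lower()
        if answer != "y":
            raise PermissionError("Occupant declined; data must not be shared.")
        grant = ConsentGrant(purpose, data_scope,
                             datetime.utcnow() + timedelta(hours=ttl_hours))
        self.grants.append(grant)
        return grant

    def allows(self, purpose: str, data_scope: str) -> bool:
        """Check that an unexpired, unrevoked grant covers this purpose and scope."""
        now = datetime.utcnow()
        return any(g.purpose == purpose and g.data_scope == data_scope
                   and not g.revoked and g.expires_at > now
                   for g in self.grants)
```

Expiry and revocation keep the occupant in control after the initial decision, which is part of what distinguishes human-owns-the-loop from a one-off human-in-the-loop prompt.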
Within the IoV a user can easily become overwhelmed by the number of decisions that need to be made, from whether or not to join a platoon to consenting to whether certain data or private information is sent to other vehicles or other systems. These decisions can be complex, involve other stakeholders, and require several trade-offs to be weighed, e.g. time versus effort. This theme investigates ways to support and even automate such decisions within a range of IoV scenarios. This involves several key challenges: (1) how to model user preferences and infer them from limited or peer knowledge, (2) how to automate negotiation between multiple stakeholders when there is a conflict of interest (e.g. who should be at the front of the platoon, who should get priority at intersections, or what data should be shared), (3) how to reward users and incentivise the “right” type of behaviour, and (4) how to make optimal decisions (e.g. in the context of routing) in the face of uncertainty.
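As an illustration of challenge (4), combined with a simple preference model in the spirit of challenge (1), the following Python sketch compares the expected utility of joining a platoon against driving alone; the probabilities, attributes and preference weights are illustrative assumptions only, not a proposed decision model.

```python
# A minimal sketch of decision-making under uncertainty: choose the option
# (join a platoon or drive alone) with the highest expected utility under a
# simple weighted preference model. All numbers are illustrative assumptions.

def expected_utility(outcomes, weights):
    """outcomes: list of (probability, {attribute: value}) pairs;
    weights: {attribute: weight} encoding the user's preferences."""
    return sum(p * sum(weights[a] * v for a, v in attrs.items())
               for p, attrs in outcomes)

# Hypothetical preference model: values fuel savings, dislikes added travel time.
weights = {"fuel_saved_l": 1.0, "extra_minutes": -0.2}

join_platoon = [
    (0.7, {"fuel_saved_l": 1.5, "extra_minutes": 2}),  # platoon holds together
    (0.3, {"fuel_saved_l": 0.3, "extra_minutes": 8}),  # platoon breaks up early
]
drive_alone = [(1.0, {"fuel_saved_l": 0.0, "extra_minutes": 0})]

options = {"join platoon": join_platoon, "drive alone": drive_alone}
best = max(options, key=lambda name: expected_utility(options[name], weights))
print("Recommended action:", best)
```

In practice the weights themselves would have to be learned or elicited, and the negotiation among stakeholders in challenge (2) would operate over utilities of this kind rather than a single user's.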
As vehicles share data over the IoV, there is great potential for new service offerings to emerge. Vehicles can organise into platoons on motorways to save fuel, they can coordinate their travel plans to avoid congestion at bottlenecks, or drivers can offer ridesharing opportunities to others. When choosing which services to participate in, human drivers or passengers will need to consider how the terms of a service align with their values, and what the benefits and costs of participation are. Designing effective services will therefore require a deep understanding of how to design both (1) privacy-aware and (2) incentive-aware services. Regarding the first, services must flexibly deal with consumers who are willing to share more or less data. In our human-centred approach, our focus is to privilege privacy by design: where possible, services need to work on anonymous, aggregated data; where personal data is required, key decisions (e.g., choosing routes or departure times) should be made on the consumer’s personal and trusted devices rather than on a centralised and monolithic server. The second and complementary challenge acknowledges that travellers often need incentives to change their behaviour patterns, e.g., to join and lead a platoon, to avoid peak times or to switch to public transport to reduce congestion. There is considerable work on how incentives can increase the efficiency of the overall system, but delivering these approaches within the complex and dynamic system that is the IoV, while taking into account the personal preferences of travellers, is a new and challenging domain.
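As one possible reading of the privacy-by-design pattern described above, the sketch below keeps the departure-time choice on the traveller's own device, using only anonymous, aggregated congestion statistics from the service; the data values and cost weights are assumptions made for illustration, not outputs of any real service.

```python
# A minimal sketch of privacy-by-design service participation: the service
# publishes only anonymous, aggregated congestion statistics, and the
# departure-time decision is made locally against private preferences that
# never leave the traveller's device. All values are illustrative assumptions.

# Anonymous, aggregated data the service might broadcast (no personal data).
congestion_index_by_hour = {7: 0.9, 8: 0.7, 9: 0.4, 10: 0.2}

# Private preferences held only on the traveller's personal device.
preferred_hour = 8
lateness_penalty_per_hour = 0.3  # how strongly later departures are disliked

def choose_departure_locally(congestion, preferred, penalty):
    """Runs on the personal device; nothing personal is sent to a server."""
    def cost(hour):
        return congestion[hour] + penalty * max(0, hour - preferred)
    return min(congestion, key=cost)

print("Locally chosen departure hour:",
      choose_departure_locally(congestion_index_by_hour,
                               preferred_hour, lateness_penalty_per_hour))
```

An incentive-aware variant of the same pattern might, for instance, fold a discount for off-peak departures into the local cost function, again without the preferences leaving the device.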
This theme focuses on the cyber security and trust challenges of human-to-vehicle (H2V), vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. Example research challenges for H2V include how to communicate trusted information to the human occupants of a vehicle in potentially untrusted environments, including the effective communication of information derived from conflicting (and possibly compromised) data sources. To deliver such secure H2V communication, we need to understand, within V2I, how to develop trusted communication between vehicle and infrastructure that ensures system security, reliability, confidentiality, integrity, availability and timeliness. Delivering that V2I data in turn requires exploring V2V challenges around how to manage appropriate authentication and encryption between vehicles in ad hoc networks. In exploring these kinds of questions within this theme, we have opportunities to contribute generalised frameworks that are adaptable to a variety of use cases.
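To make the V2V authentication challenge concrete, the following is a minimal sketch of signing and verifying a beacon message between vehicles; it assumes the third-party Python `cryptography` package and omits the certificate, pseudonym and replay-protection machinery that a real deployment would need.

```python
# A minimal sketch of V2V message authentication: the sending vehicle signs a
# beacon with Ed25519 and the receiver verifies it before trusting the contents.
# Assumes the third-party `cryptography` package; key distribution is out of
# scope here and would rely on a trusted credential scheme in practice.
import json, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sending vehicle: sign the beacon so receivers can check origin and integrity.
sender_key = Ed25519PrivateKey.generate()
beacon = json.dumps({"vehicle_id": "V-42", "speed_kmh": 88.0,
                     "timestamp": time.time()}).encode()
signature = sender_key.sign(beacon)

# Receiving vehicle: verify with the sender's public key before acting on it.
public_key = sender_key.public_key()
try:
    public_key.verify(signature, beacon)
    print("Beacon accepted:", json.loads(beacon))
except InvalidSignature:
    print("Beacon rejected: signature verification failed")
```

In an ad hoc V2V setting the hard problems lie less in the signing primitive itself than in how public keys or pseudonymous certificates are distributed, rotated and revoked, which is where generalised, use-case-adaptable frameworks could contribute.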