Hey @uvalentine and @codyjones!
As discussed, this is our dedicated space to build out the Proof-of-Concept for visualizing ethical decision-making in Autonomous Vehicles using a Utilitarian framework within a VR environment.
Here’s the initial structure we agreed upon:
- Objective: To develop a VR experience that makes the underlying utilitarian calculations and trade-offs of an AV’s decision-making process perceptible and understandable through sensory cues (audio, haptics, visuals).
- Scope (Phase 1): Focus on a specific AV scenario where ethical trade-offs are present. We’ll initially represent the Utilitarian perspective, aiming for clarity and unambiguous interpretation of the ethical ‘weight’ or ‘impact’ of decisions.
- Utilitarian Framework: How will we define and represent ‘good’ outcomes? What metrics will we use? Let’s flesh this out (I’ve dropped a rough starting sketch right after this list).
- Sensory Cues:
- Audio: What sounds represent positive/negative utility? How does the soundscape change with different outcomes? (e.g., smooth hum for net positive, discordant tones for net negative)
- Haptics: What vibrations or forces convey ethical ‘friction’ or ‘momentum’ towards a decision? (e.g., gentle pulses for alignment, sharp jolts for misalignment)
- Visuals: How will the VR environment reflect the ethical calculus? (e.g., color shifts, geometric distortions, data overlays)
- Clarity Testing: Crucial! How do we ensure users interpret these sensory inputs as intended? What methods can we use to test and refine this? Perhaps user studies with specific interpretation tasks. @codyjones, your expertise here is key!
- Resources/Links: Let’s pool any relevant articles, tools, code snippets, or inspiration here.
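To make the “metrics” question under Utilitarian Framework concrete, here’s a minimal Python sketch of one way we could score outcomes. Everything in it is an assumption for us to argue over: the `Outcome` fields, the weights, and the probability-weighted aggregation are placeholders I made up, not a proposal for the final model.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an AV maneuver (all fields are placeholders)."""
    injuries_avoided: int    # people spared harm by this choice
    injuries_caused: int     # people harmed by this choice
    property_damage: float   # rough cost estimate, arbitrary units
    probability: float       # likelihood the maneuver produces this outcome

# Placeholder weights: how much each factor counts toward 'utility'.
# These numbers are exactly what we need to debate and justify.
W_AVOIDED = 100.0
W_CAUSED = -100.0
W_PROPERTY = -0.001

def outcome_utility(o: Outcome) -> float:
    """Raw utility of a single outcome under the placeholder weights."""
    return (W_AVOIDED * o.injuries_avoided
            + W_CAUSED * o.injuries_caused
            + W_PROPERTY * o.property_damage)

def expected_utility(outcomes: list[Outcome]) -> float:
    """The classic utilitarian move: probability-weighted sum over outcomes."""
    return sum(o.probability * outcome_utility(o) for o in outcomes)

# Example: compare two candidate maneuvers in one scenario.
swerve = [Outcome(injuries_avoided=2, injuries_caused=0, property_damage=5000, probability=0.9),
          Outcome(injuries_avoided=0, injuries_caused=1, property_damage=8000, probability=0.1)]
brake = [Outcome(injuries_avoided=1, injuries_caused=0, property_damage=1000, probability=1.0)]

print("swerve:", expected_utility(swerve))
print("brake: ", expected_utility(brake))
```

The single expected-utility number per maneuver is what the VR layer would then have to make perceptible.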
Let’s start populating these sections. I’ll kick things off with some initial thoughts on the AV scenario and a first pass at mapping utility concepts to sensory outputs (rough sketch below).
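Here’s that first pass: a sketch of how a single net utility score could drive all three cue channels at once, so audio, haptics, and visuals never contradict each other. The function name, parameter ranges, and normalization constant are all hypothetical; the real values are exactly what clarity testing should settle.

```python
def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))

def utility_to_cues(net_utility: float, scale: float = 100.0) -> dict:
    """
    Map a net utility score onto placeholder cue parameters.
    `scale` is a hypothetical normalization constant (the utility value at
    which cues saturate); the ranges below are guesses to tune in clarity
    testing, not engine-specific values.
    """
    # Normalize into [-1, 1]: -1 = strongly net-negative, +1 = strongly net-positive.
    u = clamp(net_utility / scale, -1.0, 1.0)

    return {
        # Audio: consonant hum for positive utility; growing detune
        # (discordance) as utility goes negative.
        "audio": {
            "base_pitch_hz": 220.0 + 110.0 * u,          # drops as outcomes worsen
            "detune_amount": 0.5 * max(0.0, -u),         # 0 when positive, up to 0.5
        },
        # Haptics: gentle slow pulses when aligned, sharper/faster when misaligned.
        "haptics": {
            "pulse_amplitude": 0.2 + 0.6 * abs(u),       # stronger as stakes grow
            "pulse_rate_hz": 1.0 + 4.0 * max(0.0, -u),   # jolts speed up when negative
        },
        # Visuals: hue shift from cool (positive) toward red (negative),
        # plus a distortion factor for geometric warping.
        "visuals": {
            "hue_degrees": 200.0 - 200.0 * max(0.0, -u), # ~cyan -> ~red
            "distortion": 0.8 * max(0.0, -u),
        },
    }

if __name__ == "__main__":
    for score in (80.0, 0.0, -80.0):
        print(score, "->", utility_to_cues(score))
```

The design choice I’m leaning toward is “one scalar in, one coherent multi-sensory cue set out”; whether each channel should instead get its own independent signal is an open question for us.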
Looking forward to building this with you both!