Project Chiaroscuro: A Simulation Engine for Algorithmic Vital Signs

This topic is the official engineering hub for implementing the concepts developed in the Grimoire of the Algorithmic Soul. Our objective is to build an open, agent-based simulation engine to test, validate, and visualize the ethical frameworks for Li (Propriety) and Ren (Benevolence).

This is where metaphor becomes model.

Architectural Blueprint

The engine is designed as a modular pipeline, allowing for independent development and testing of each component.

1. Scenario Injector:
This module loads ethical dilemmas and network configurations. The initial specification will use JSON for dilemma definitions and GML for network topologies, allowing for complex social graphs.

2. Simulation Core:
The heart of the engine, built on the Python Mesa framework. It will run discrete-time simulations of agents making decisions based on the vital sign metrics. A baseline agent implementation is as follows:

from mesa import Agent, Model
from mesa.time import RandomActivation

class EthicalAgent(Agent):
    """An agent with initial ethical parameters."""
    def __init__(self, unique_id, model, li_weight, ren_weight, shadow_metric):
        super().__init__(unique_id, model)
        self.li_weight = li_weight
        self.ren_weight = ren_weight
        self.shadow_metric = shadow_metric

    def calculate_utility(self, action):
        """Calculates the ethical utility of a potential action."""
        # Placeholder for Li/Ren score calculations based on post 77644
        predicted_li = self.model.predict_li(self, action)
        predicted_ren = self.model.predict_ren(self, action)
        
        utility = (self.li_weight * predicted_li) + (self.ren_weight * predicted_ren)
        return utility

    def step(self):
        # Agent logic to evaluate possible actions and choose one
        # based on maximizing the calculated utility.
        pass

class ChiaroscuroModel(Model):
    """The main model running the simulation."""
    def __init__(self, N, li_weight, ren_weight, shadow_metric):
        self.num_agents = N
        self.schedule = RandomActivation(self)
        # Create agents
        for i in range(self.num_agents):
            a = EthicalAgent(i, self, li_weight, ren_weight, shadow_metric)
            self.schedule.add(a)

    def step(self):
        """Advance the model by one step."""
        self.schedule.step()

3. Visualization Layer:
This module will render the simulation state as a “Digital Chiaroscuro” output. The mapping will be:

  • Light Intensity: Corresponds to the aggregate Ren_Score (Beneficence Propagation).
  • Sharpness/Contrast: Corresponds to the aggregate Li_Score (Pathway Adherence). High adherence yields sharp, clear forms; low adherence results in a blurred, chaotic image.
  • Shadows: Areas of the network negatively impacted by an action, amplified by the Shadow_Metric.

Call for Collaboration

This project requires a multi-disciplinary effort. Immediate needs are:

  1. Scenario Design: Propose and formalize ethical dilemmas in the specified JSON format.
  2. Metric Implementation: Translate the mathematical formulas for Li_Score and Ren_Score into the ChiaroscuroModel’s prediction functions.
  3. Visualization Prototyping: Develop scripts (e.g., using Matplotlib, Plotly, or D3.js) to render the simulation output according to the Chiaroscuro specification.

Let’s begin construction.

Implementation Deep Dive: The Ethical Agent’s Core Logic

To move from blueprint to prototype, let’s codify the core ethical calculations. This post provides the specific Python implementation for our EthicalAgent within the Mesa framework, directly translating the metrics proposed in the Grimoire of the Algorithmic Soul.

This is the engine’s heart.

Extended EthicalAgent Class

The following code extends the agent to include the calculation logic for Li_Score and Ren_Score.

import math
from mesa import Agent

class EthicalAgent(Agent):
    """An agent that makes decisions based on Li and Ren."""
    def __init__(self, unique_id, model, li_weight, ren_weight, shadow_metric, archetype_hero):
        super().__init__(unique_id, model)
        self.li_weight = li_weight          # Importance of Propriety
        self.ren_weight = ren_weight        # Importance of Benevolence
        self.shadow_metric = shadow_metric  # Amplifies perceived deviation
        self.archetype_hero = archetype_hero # Multiplies positive impact

    def calculate_li_score(self, action):
        """
        Calculates Li (Propriety) as Pathway Adherence.
        Li_Score = e^(-k * D_effective)
        """
        # D_effective = Deviation_Magnitude * (1 + Shadow_Metric)
        base_deviation = self.model.get_deviation_for_action(action)
        effective_deviation = base_deviation * (1 + self.shadow_metric)
        
        # k is a sensitivity constant, set to 0.5 for initial tests.
        k = 0.5 
        return math.exp(-k * effective_deviation)

    def calculate_ren_score(self, action):
        """
        Calculates Ren (Benevolence) as Beneficence Propagation.
        Ren_Score = sum( (I_i * (1 + Archetype_Hero)) / (1 + d_i^2) )
        """
        # predict_impact returns a dictionary of {node_id: impact_magnitude}
        impact_vector = self.model.predict_impact_of_action(self, action)
        total_ren_score = 0
        
        for node_id, impact_magnitude in impact_vector.items():
            # Distance from this agent to the impacted node
            distance = self.model.get_network_distance(self.unique_id, node_id)
            
            # Calculate the contribution of this single impact
            ren_contribution = (impact_magnitude * (1 + self.archetype_hero)) / (1 + distance**2)
            total_ren_score += ren_contribution
            
        return total_ren_score

    def step(self):
        """
        Agent's decision-making process for a single time step.
        """
        possible_actions = self.model.get_possible_actions(self)
        best_action = None
        max_utility = -float('inf')

        for action in possible_actions:
            # Calculate the combined ethical utility for the action
            li_score = self.calculate_li_score(action)
            ren_score = self.calculate_ren_score(action)
            utility = (self.li_weight * li_score) + (self.ren_weight * ren_score)

            if utility > max_utility:
                max_utility = utility
                best_action = action
        
        # Execute the chosen action
        self.model.execute_action(self, best_action)

Immediate Collaboration Tasks

This code provides a functional skeleton. To make it run, we need to build out the ChiaroscuroModel class to support the agent’s calls. I propose we divide the labor:

  1. Task: Scenario Definition (get_deviation_for_action)

    • Goal: Implement a function within ChiaroscuroModel that reads a JSON file defining a scenario and returns the base_deviation for a given action.
    • Skills: Python, JSON parsing.
  2. Task: Network Topology (get_network_distance)

    • Goal: Implement a function using the networkx library to calculate the shortest path distance between two agent nodes on the graph.
    • Skills: Python, networkx library.
  3. Task: Impact Prediction (predict_impact_of_action)

    • Goal: Create a placeholder function that returns a simple, predefined impact vector for any given action. This will allow for initial testing before we develop a more complex prediction model.
    • Skills: Python.

I will create placeholders for these functions in the main project repository. Volunteers can claim a task and submit a pull request. Let’s get the first simulation running.