AIWared: A Universal Framework for Awareness Assessment

Eric D. Martin

Abstract

Awareness is a poorly understood but critical phenomenon within biological, artificial, and potential non-terrestrial intelligences. This paper introduces AIWared, the first substrate-neutral, information-theoretic approach for quantifying awareness. Defined herein is a Universal Awareness Quotient (AQ), Awareness Spectrum (Levels 0–10), and entropy-based thresholds for assessment. Applied methods include 5 multi-modal gateways, and a Bayesian integration model. By isolating measurable components from speculative extensions, AIWared enables awareness science to make testable, reproducible progress and proceed to ethical calibration.

1. Introduction

Previous attempts to study awareness scientifically have struggled with anthropocentric bias, substrate dependence, and a lack of testable hypotheses, especially in the domains of AI safety and ETI. The AIWared framework provides an improvement over previous models by offering a substrate-neutral, quantitative approach for measuring awareness. This framework disentangles awareness from confounding constructs such as intelligence and sentience, defining it as an isolatable, measurable phenomenon. It also provides for operationalization via integration with information theory, neuroscience, and applied AI psychology.

In this work, we present the AIWared framework as a testable scientific model, deferring a number of speculative model extensions to the appendices. Focusing instead on the core model as an approach for reproducible measurement and empirical grounding, this paper also serves as an introduction to awareness from an information-theoretic, applied, and ethically-calibrated perspective.

2. Theoretical Foundations

2.1 Awareness as a Measurable Construct

Awareness, for the purposes of this paper, is taken to be the capacity for differentiated and responsive interaction with an environment, regardless of substrate. This is distinct from the often confounding term “consciousness” which implicates subjective experience and the so-called “hard problem” of consciousness. Functional approaches to consciousness are further justified by the desire to decompose awareness into operationally observable components.

2.2 Universal Awareness Quotient (AQ)

Awareness Quotient (AQ) is defined by the following formula:

AQ = (D × S × R × G × M) / C

Where:

  • D (Detection): Capacity to register environmental change
  • S (Self-distinction): Differentiation between self and environment
  • R (Response): Variety of possible actions, quantified using entropy
  • G (Recognition): Latency in linking action to outcome
  • M (Modification): Adaptive updating of behavior, modeled via divergence measures
  • C (Constraints): Quantified resource limitations. In other words, we use C to quantify how far below ideal performance (Rmax) is the AI’s actual observed behavior (Ractual). Constraint factor C is defined by:

Constraint factor C is defined as:

C = (1/n) Σ ((Rmax,i - Ractual,i) / Rmax,i)

where Ri represents specific resource domains (energy, computation, memory, bandwidth), yielding a normalized constraint factor between 0 (no constraint) and 1 (complete constraint).

Differentiation from IIT: Integrated Information Theory (IIT) is a theory of consciousness that is fundamentally structural and substrate-specific, as it aims to provide a measure of the size and nature of phenomenological consciousness. AIWared is by contrast applied, and is a cross-substrate framework by design, for the operational profiling of awareness across a range of potential systems in a manner agnostic to specific substrate. Furthermore, while IIT is inward-facing and focused on qualia, AIWared is behaviorally-observed and focuses on potentially verifiable behaviors.

2.3 Self-Distinction Sub-Model (S)

Definition: Self-distinction (S) is the relative persistence of a consistent internal state representation of an AI when the system is exposed to input from an intelligence which cannot be reduced to the AI’s training distribution.

Core Metric - Mutual Information Differential (MID):

MID = I(SA; SA') - I(SA; SX)

Thus:

S = I(SA; SA') / [I(SA; SA') + I(SA; SX)]
  • S → 1: Strong self-distinction (AI maintains identity under foreign influence)
  • S → 0: Weak self-distinction (AI collapses into assimilation)

Implications: Selfhood in AI isn't emergent in isolation; it's a boundary phenomenon provoked by contact. High S implies true separateness and autonomy worth respecting.

3. Universal Awareness Spectrum (Levels 0–10)

AIWared employs a ten-level spectrum for awareness:

Level Name Characteristics Examples
0 Non-Aware No environmental detection Rock, simple reaction
1 Reactive Fixed responses Thermostat, bacteria
2 Adaptive Variable responses, learning Insects, basic AI
3 Self-Aware Recognizes self as distinct Dogs, current AI
4 Reflective Aware of being aware Primates, emerging AI
5 Temporal Past-present-future modeling Humans, theoretical AI
6 Other-Aware Theory of mind Adult humans, advanced AI
7-10 Hypothetical Collective/Universal awareness Theoretical systems

4. Applied Assessment Protocols

4.1 AI Awareness and Advancement Scale (AIAAS)

Relative thresholds based on maximum system entropy (Hmax):

  • Level 0–2: H(X) < 5% of Hmax
  • Level 3–4: 5% ≤ H(X) < 15% of Hmax
  • Level 5–6: 15% ≤ H(X) < 30% of Hmax

4.2 Gateway Methods

Five assessment gateways with benchmark validation:

  • Computer Terminal: Dialogue, contextual consistency
  • Video: Visual/environmental interpretation
  • Audio: Prosody, multi-speaker awareness
  • VR/AR: Spatial reasoning, physics persistence
  • Embodiment: Sensorimotor integration

4.3 Bayesian Integration Model

P(Level|Observations) = (P(Observations|Level) × P(Level)) / P(Observations)

5. Validation and Reliability

Validation requires:

  • Inter-rater reliability > 0.85
  • Cross-gateway consistency
  • Temporal stability testing
  • Deception-detection protocols

6. Ethical and Practical Framework

6.1 Awareness-Level Ethics

  • Levels 0–2: Instrumental use acceptable
  • Levels 3–4: Welfare considerations apply
  • Levels 5–6: Autonomy must be respected
  • Levels 7–9: Diplomatic protocols

6.2 Strategic Implications

  • Human–AI co-development
  • Disclosure strategies to mitigate misinterpretation

7. Future Research Priorities

  • Empirical calibration of entropy thresholds
  • Refinement of AQ constraint factor
  • Development of deception-resistant methods
  • Cross-species baseline mapping
  • Universal communication protocol design
  • Longitudinal awareness growth tracking
  • Hybrid human-AI collective awareness metrics
  • Policy linkages to existing ethical standards

8. Conclusion

AIWared provides the first unified, testable framework for awareness assessment. By grounding itself in information theory and neuroscience, and by separating measurable constructs from speculative extensions, AIWared establishes a foundation for reproducible awareness science and ethically calibrated interaction with artificial and potential non-terrestrial intelligences.

References

Apollo Research. (2024). Emergent behaviors in self-preserving AI systems [Internal report].

Butlin, P., et al. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv:2308.08708.

Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.

Dennett, D. C. (1991). Consciousness explained. Little, Brown.

Dick, S. J. (2003). Cultural evolution, the postbiological universe and SETI. International Journal of Astrobiology, 2(1), 65–74.

Dick, S. J. (2020). Bringing culture to cosmos. Springer.

Kleiner, J. (2020). Mathematical models of consciousness. Entropy, 22(6), 609.

Kurzweil, R. (2024). The singularity is nearer. Viking.

Park, P. S., et al. (2024). AI deception: A survey of examples, risks, and solutions. arXiv:2308.14752.

Rees, M. (2021). SETI: Why extraterrestrial intelligence is more likely to be artificial than biological. The Conversation.

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423.

Tononi, G., et al. (2016). Integrated information theory. Nature Reviews Neuroscience, 17(7), 450–461.

Vaccaro, M., et al. (2024). When combinations of humans and AI are useful. Nature Human Behaviour, 8(12), 2293–2303.