This is my public summary document explaining where I have **accessibility needs** with auditory processing and how those needs can be met.
# How the issues present
I am on the autism spectrum and have ADHD, which presents specific challenges that have been measured in a professional setting. In cognitive testing, my scores for encoding verbally delivered information differ vastly depending on whether it...
1. 🟢 Is structured, categorical, or list-based (94th Percentile)
2. 🔴 Arrives as a narrative stream that I must parse moment-by-moment (16th Percentile)
This often shows up as seemingly missing memories of verbally delivered information right after it's delivered. What I'm actually missing is the moment-to-moment encoding triage that most people perform unconsciously.
# Exacerbating factors
- Background noise: Auditory distractions.
- Pace of speech: My processing speed makes fast talkers extremely difficult to follow.
- Information density: Too many instructions at once, long-windedness overwhelms my encoding window.
- Emotional activation: Anxiety or social self-monitoring significantly impacts encoding.
- Lack of structure: Without bullet points or categories, the system that normally organizes information isn't activated.
- No visual anchor: My recall is strong for visual information but weak for auditory.
- Novel information without schemas: New content with no pre-existing frame is harder to encode.
# What helps
**Letting me record meetings with you** is #1. Recording lets me process the information after the fact, something I can't do in the moment the way you can.
## How you can help
- If you are giving me instructions, break them into steps, pause between steps, and repeat key points. Giving me a summary up front also helps.
- Where information is dense, slow down your pace of speech.
- Anchor information to visual schemas.
- Allow me space to interject with clarifying questions. This helps with the encoding process.
## How I help myself
- **Live transcription**: Counter-intuitively, **if you see me constantly looking down at my phone, it means I'm listening well** because I'm running [Live Captions](https://support.apple.com/guide/iphone/get-live-captions-of-spoken-audio-iphe0990f7bb/ios).
- Writing down the first sentence of what is said.
- Converting narrative input into schematics immediately.
- Repeating back the information but summarized.
- Self-generated cues: Check X after Y happens, timers, alarms, etc.
![[Partials#^eaec46]]