Merit, But Make It Legible

One of the more irritating features of modern life is that people love to say they reward merit when what they often reward is legibility.

Not raw capability.
Not force of will.
Not how much resistance someone had to push through to become good at something.

Legibility.

Did the achievement arrive in packaging the system already knows how to admire? Did it come with a famous school, recognizable institutions, polished references, family support, clean internships, the right tone, the right posture, the right little trail of approved breadcrumbs? If so, people relax. They call it excellence.

Meanwhile, if someone arrives at similar visible competence through a messier path — sparse resources, little formal support, public materials, self-direction, no safety net, and almost no room for error — the response is often weirdly diminished.

That person becomes scrappy.
Surprisingly strong.
Promising.
Impressive, considering.

Considering what, exactly?

What is being “considered” is usually the absence of prestige decoration. The person may have built nearly the same capability, or in some cases more durable capability, but because they did not emerge from a trusted institutional pipeline, people treat the result as somehow less real. Or more provisional. Or faintly suspicious. They get credit, but in the off-brand, slightly patronizing way society reserves for people who succeeded without first being pre-approved.

This is backwards in an important sense.

The person who had elite schooling, money, family support, institutional legitimacy, and low-friction access to opportunity may in fact be highly capable. None of this automatically disqualifies them. Plenty of advantaged people are genuinely excellent.

But there is still a difference between demonstrating excellence under supportive conditions and constructing yourself under weak ones.

The bootstrap path often demands a set of traits that institutions claim to admire but are not especially good at recognizing in the wild:

  • initiative
  • independence
  • persistence
  • improvisation
  • the ability to learn without structure
  • the ability to continue without validation
  • the ability to recover from mistakes that were actually costly

Those are not decorative virtues. Those are core builder traits.

And yet, because they do not come pre-certified by prestige systems, they are routinely under-read. Not merely under-resourced at the start — under-credited even after the fact.

That distinction matters.

Being under-resourced means you lacked inputs.
Being under-credited means the world misreads what you produced.

Those are different problems.

The first makes the climb harder.
The second makes the summit look smaller than it is.

A lot of evaluators will insist this is not bias, just pragmatism. They will say elite labels are useful proxies. And to be fair, they are. Institutions act as compression algorithms. They save busy people the trouble of asking inconvenient questions like:

  • How hard was this path, actually?
  • How much support was quietly embedded in the background?
  • How much independent force did this person have to generate on their own?
  • How many hidden cushions were mistaken for personal greatness?

These are not questions most systems are built to ask, because they are expensive to answer and mildly destabilizing to the mythology. It is much easier to see Harvard, billionaire parents, polished confidence, and familiar signals, then conclude: obviously exceptional.

Clean. Efficient. Safe.

It is much less comfortable to look at someone who assembled themselves from public materials, intermittent guidance, and sheer stubbornness, then admit that what you are seeing may represent a more violent act of self-construction.

The elite profile is often treated as natural greatness.
The bootstrap profile is often treated as an anomaly.

But anomalies are sometimes just reality showing through the branding.

This does not mean the bootstrap person is always better. That would just be reverse snobbery with better PR. The point is narrower and more important: achievement is frequently judged by how frictionless it looks, not by how much force was required to make it happen.

And force matters.

Especially in domains where the environment is unstable, where there is no syllabus, where support is partial, where nobody is coming to organize your progress for you. In those situations, the ability to move without structure, learn without permission, and continue without applause is not some charming side trait. It is often the thing itself.

That person may not sound as polished.
They may not tell the story as elegantly.
They may not have the right names on the résumé.
They may not know how to perform legitimacy in the dialect gatekeepers prefer.

But sometimes they built more real capability with less help and less slack.

And the world, being the world, often reads that as scrappy instead of formidable.

Which is convenient, because formidable would force people to rethink what they are actually rewarding.

Building VoiceAnki, Part II

Real Decks, Bad Formatting, and the Small Matter of Talking to Your Phone

Last time I wrote about VoiceAnki as the project that started as “what if Anki had a mouth and some manners” and then kept escalating.

This post is the sequel where the app met real decks, real speech errors, and the ancient software engineering tradition of discovering that your clean architecture was, in fact, a suggestion.

The short version:

  • the speech loop got less gullible
  • the grader got more structural
  • the logs stopped being decorative
  • I built a local robot to do smoke tests because my own voice was starting to file HR complaints
  • and we are now close enough to the edge of deterministic grading that the next layer is visible, but still carefully fenced off

This is not an “AI solves education” post.

It is a post about building a voice-first Android study app that has to survive:

  • imported decks with formatting from the cursed earth
  • speech recognition that is usually helpful and occasionally drunk
  • grading policy that has to be fast, fair, and local
  • users who absolutely do not care that the regex looked elegant in your notebook

Demo Decks Lie

There is a phase every voice app gets to enjoy where the demo looks great.

You ask a clean question. You answer with a clean sentence. The recognizer hands you a clean transcript. The evaluator gives you a clean pass. Everyone nods like this was a serious plan all along.

Then you point the app at real material.

That is when you meet answers like:

  • 1. foo2. bar
  • Successful: ... Unsuccessful: ...
  • Pros: ... Cons: ...
  • 1877-78
  • Gen. Milyutin
  • one huge paragraph that starts with the useful bit and then wanders into side quests

Imported decks are not malicious. They are just old, messy, human, and full of local conventions. In other words: exactly the kind of input software tends to hate.

The first big lesson of this branch was that the grader needed to stop pretending every card was basically the same problem. A short person-name fact is not the same thing as a date range. A date range is not the same thing as a compact list. A compact list is not the same thing as a long explanatory answer that a human will naturally summarize instead of reciting bullet-by-bullet like a haunted audiobook.

That sounds obvious now. It was less obvious when the system was still getting away with a lot of fuzzy matching and a relatively small pile of hand-reviewed examples.

Card Shape Beats Raw String Length

The biggest architectural shift in this branch is simple to say and annoyingly non-trivial to implement:

grade by answer shape, not just by answer text

That means the evaluator now spends more effort upfront figuring out what sort of thing it is looking at:

  • short factual answer
  • person name
  • short numeric answer
  • definition
  • compact list
  • explanatory multi-point answer
  • command-like or control-like utterance

Once you have that, the rest of the pipeline gets saner. You stop asking one grading rule to play twelve different sports at once.

We are still keeping the main grading path deterministic and fast. That is not nostalgia; it is product design. If a spoken flashcard app feels like it pauses to hold a committee meeting before deciding whether 1877 to 1878 means 1877-78, the illusion is gone.

The user experience needs to feel immediate.

That means the hot path still has to be cheap:

  • classify once
  • prepare candidate structure once
  • compare against compact evidence
  • decide

If later we add something smarter for borderline cases, it has to sit behind that path, not inside it.
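The classify-once step can be sketched as a single cheap pass over the stored answer. The shape names and heuristics below are illustrative stand-ins, not the app's actual taxonomy or rubric:

```kotlin
// Illustrative answer shapes; the real taxonomy is richer.
enum class AnswerShape { NUMERIC, COMPACT_LIST, PERSON_NAME, EXPLANATORY, SHORT_FACT }

// A deliberately cheap classifier: one pass, no allocation-heavy work,
// so it can sit on the hot path before any comparison happens.
fun classifyShape(answer: String): AnswerShape {
    val trimmed = answer.trim()
    val words = trimmed.split(Regex("\\s+"))
    return when {
        // pure digits/ranges like "1877-78"
        trimmed.matches(Regex("[0-9][0-9 ,.-]*")) -> AnswerShape.NUMERIC
        // numbered markers like "1. " or "2) " signal a compact list
        Regex("\\d+[.)]\\s").containsMatchIn(trimmed) -> AnswerShape.COMPACT_LIST
        // a few capitalized tokens looks like "Gen. Milyutin"
        words.size <= 3 && words.all { it.firstOrNull()?.isUpperCase() == true } ->
            AnswerShape.PERSON_NAME
        words.size > 25 -> AnswerShape.EXPLANATORY
        else -> AnswerShape.SHORT_FACT
    }
}
```

Once a shape is attached, each downstream rule only has to be good at one sport.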

Structure Beats Vibes

One of the most useful additions here is a new structured-answer parser. I am not going to dump the entire evaluator recipe into a public post, because some of that is still moving and some of it is the kind of thing you learn by burning hours in log review. But the broad move is worth talking about.

Instead of treating every stored answer as one opaque blob, VoiceAnki now tries to recognize when the answer is actually a structure:

  • a compact list
  • a numbered list
  • a labeled list
  • a longer explanatory list

That sounds modest. It is not modest. It changes the whole feel of grading.

Here is a trimmed version of the parser entry point:

fun parse(answerText: String): StructuredAnswerParse {
    val decoded = decodeAnswerText(answerText)

    // Numbered markers ("1.", "2)", ...) are the strongest structural signal.
    val numberedItems = extractNumberedItems(decoded)
    if (numberedItems.size >= 2) {
        val items = numberedItems.map(::buildItem)
        return StructuredAnswerParse(
            kind = classifyKind(items),
            items = items,
        )
    }

    // Fall back to labeled items ("Pros:", "Cons:", ...) when numbering is absent.
    val labeledItems = extractLabeledItems(decoded)
    if (labeledItems.size >= 2) {
        val items = labeledItems.map(::buildItem)
        return StructuredAnswerParse(
            kind = classifyKind(items),
            items = items,
        )
    }

    // Not recognizably structured; grade as a plain answer.
    return StructuredAnswerParse()
}

That is not magic. It is just the system finally admitting that:

  • formatting matters
  • import damage matters
  • labels matter
  • and if the stored answer is really a list, we should stop grading it like a paragraph that fell down the stairs

Another small but satisfying detail is handling glued list markers. This is the kind of bug that sounds fake until you meet it in the wild:

val source = answerText
    .replace('\n', ' ')
    // split glued list markers: "foo2." becomes "foo 2."
    .replace(Regex("(?<=[a-zA-Z])(?=[1-9][.)-])"), " ")
    .replace("\\s+".toRegex(), " ")
    .trim()

That one regex replacement exists because decks really do contain things like foo2. bar, and if you do not split that boundary correctly, you end up evaluating nonsense against nonsense and calling it rigor.
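Wrapped as a standalone helper (the function name is mine, for illustration), the behavior is easy to check in isolation:

```kotlin
// The glued-marker repair, isolated so it can be exercised directly.
fun repairGluedMarkers(answerText: String): String =
    answerText
        .replace('\n', ' ')
        // insert a space where a letter runs straight into a marker like "2."
        .replace(Regex("(?<=[a-zA-Z])(?=[1-9][.)-])"), " ")
        .replace(Regex("\\s+"), " ")
        .trim()
```

So `1. foo2. bar` comes out as `1. foo 2. bar`, and the item extractor sees two items instead of one mangled one.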

The public version of the lesson is:

real grading quality is often won or lost before you ever compare a transcript to anything

If candidate preparation is bad, downstream scoring does not matter much. You are just being wrong with more confidence.

Speech Software Is Mostly About Timing

There is another lie voice products tell when they are young: that speech recognition quality is the main problem.

It is a problem. It is not the only problem. A lot of the actual work is timing, turn-taking, partials, retries, and deciding when not to believe the recognizer’s last word on what just happened.

This branch did a bunch of work in the speech loop itself:

  • carrying multiple alternatives deeper into grading
  • preserving useful partials
  • separating answer listening from control language
  • treating very short answers differently from long ones
  • quietly retrying some short numeric misses instead of immediately punting to the UI

One of the safer excerpts here is the fallback path for partials:

private fun partialFallbackResult(
    error: Int,
    speechStarted: Boolean,
    strongPartialPhrases: List<String>,
    partialPhrases: List<String>,
): RecognitionResult.Transcript? {
    if (!speechStarted) {
        return null
    }

    val fallbackPhrases = mergePhrases(
        primary = strongPartialPhrases,
        secondary = partialPhrases,
    )

    if (fallbackPhrases.isEmpty()) {
        return null
    }

    return when (error) {
        SpeechRecognizer.ERROR_NO_MATCH -> RecognitionResult.Transcript(fallbackPhrases)
        else -> null
    }
}

This is one of those changes that sounds small until you look at user experience.

If the user said something real, the recognizer heard enough to produce useful partials, and the final result still collapsed into ERROR_NO_MATCH, the product should not act like the person never spoke. That is the kind of behavior that makes users think the app is being smug on purpose.

Arithmetic cards were especially good at exposing this. If the app cannot survive one-word answers like five, it does not matter how clever your long-answer scoring is. Nobody is impressed. They are just annoyed.

So a lot of recent work has been about making the short-answer path feel less brittle without turning the whole system into a thicket of deck-specific hacks.
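One plausible piece of that short-answer path, sketched rather than quoted from the shipped code, is normalizing spoken number words before comparison, so "five" can meet a stored "5" halfway:

```kotlin
// Hypothetical helper: map spoken number words to digits so one-word
// answers like "five" can match arithmetic cards stored as "5".
private val numberWords = mapOf(
    "zero" to "0", "one" to "1", "two" to "2", "three" to "3", "four" to "4",
    "five" to "5", "six" to "6", "seven" to "7", "eight" to "8", "nine" to "9",
    "ten" to "10",
)

fun normalizeSpokenNumber(transcript: String): String =
    transcript.trim().lowercase().let { numberWords[it] ?: it }
```

The point is that the normalization is generic to speech, not a hack for any particular deck.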

Fast Matters More Than Fancy

One thing I want to be explicit about: there is a lot of temptation in this space to keep throwing more intelligence at grading until it feels “smart.”

That is not automatically a win.

For VoiceAnki, grading speed is part of the product. The user just spoke. The app needs to respond like it was listening, not like it has submitted a ticket.

That constraint shapes the whole design:

  • keep the deterministic path local
  • keep candidate preparation reusable
  • keep transcript-time scoring bounded
  • do not add a visible “thinking…” pause to the normal loop

There is secret sauce in the exact rubric and decision policy, and I am not going to dump that out here line-by-line. But the public-facing principle is straightforward:

the fast path has to stay boring

If the user notices grading latency, they stop trusting the rhythm of the interaction.

And voice UX is rhythm.

Logs Graduated From Debug Tool to Product Infrastructure

I used to think of logs as something you improve once the interesting engineering is done.

That was cute.

On a speech app, logs are part of the interesting engineering.

A bad miss can come from:

  • speech recognition
  • transcript selection
  • answer-shape classification
  • lexical comparison
  • summary-vs-list policy
  • command/control routing
  • deck formatting

That means “it got this wrong” is not one bug category. It is a small crime scene.

So this branch put a lot more effort into making the logs answer questions like:

  • what transcript did we actually choose?
  • what kind of answer did we think this card wanted?
  • what decision path fired?
  • what evidence made the evaluator accept or reject?

That turns review from:

  • “huh, weird”

into:

  • “the answer was parsed as a structured list, but the wrong branch still ran”
  • “the recognizer had a good partial and then dropped the final”
  • “the card was really a summary-shaped answer, but the evaluator treated it like a raw string match”

That is a much more productive kind of pain.
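A minimal version of that kind of decision trace might look like this; the field names are invented for illustration, not the app's actual log schema:

```kotlin
// Illustrative trace record: one line per grading decision, enough to
// answer "which transcript, which shape, which branch, which evidence".
data class EvalTrace(
    val chosenTranscript: String,
    val answerShape: String,
    val decisionPath: String,
    val evidence: String,
    val accepted: Boolean,
) {
    fun toLogLine(): String =
        "eval transcript=\"$chosenTranscript\" shape=$answerShape " +
            "path=$decisionPath evidence=\"$evidence\" accepted=$accepted"
}
```

The win is that every miss arrives pre-sorted into a bug category instead of as "huh, weird."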

We Built a Tiny Robot Because Manual Smoke Testing Is a Scam

One of my favorite additions around this branch is a local Pipecat smoke-test agent.

This is not some grand autonomous tutoring system. It is a very specific little goblin.

Its job is:

  1. listen to VoiceAnki through the laptop mic
  2. wait for the phone to stop talking
  3. answer through the laptop speakers
  4. keep doing that long enough to flush out session-loop bugs

That sounds silly. It is also incredibly useful.

The helper has a VoiceAnki-specific prompt, local audio transport, transcript logging, and a blunt little repeat-limit rule so it does not get stuck asking for the question forever:

repeat_limit_rule = f"""
Temporary smoke-test rule:

- Track how many times you have said exactly "can you repeat the question" for the current card.
- If you have already asked {max_repeat_requests} times for the same card, do not ask again.
- Instead, say exactly: I don't know
- Use that forced failure to let VoiceAnki mark the card wrong and move to the next question.
""".strip()

That rule exists because, left to their own devices, voice systems will absolutely form little conversational sinkholes and sit there repeating themselves like two Roombas politely arguing in a closet.

I also finally wrote proper smoke-run capture scripts so the whole thing can run unattended and leave behind artifacts we can review later:

ANDROID_CAPTURE_PID="$(spawn_detached "$ROOT_DIR" "$ANDROID_LOG" \
  "$ADB_BIN" -s "$ADB_SERIAL" logcat -v time \
  VoiceAnkiSpeech:D VoiceAnkiEval:D VoiceAnkiSemantic:D AndroidRuntime:E '*:S')"

PIPECAT_CAPTURE_PID="$(spawn_detached "$PIPECAT_DIR" "$PIPECAT_LOG" \
  "$PIPECAT_PYTHON" agent.py --input-device "$PIPECAT_INPUT_DEVICE" \
  --output-device "$PIPECAT_OUTPUT_DEVICE")"

That gives each run:

  • filtered Android logs
  • Pipecat logs
  • run metadata
  • timestamped folders for later review

It turns out this matters a lot, because manual voice testing is expensive in a very dumb way. You can lose an hour just being the person who says Roosevelt into a phone over and over while watching adb logcat scroll by like the Matrix, except less profitable.

Once a little robot can do even part of that for you, bugs start showing up in clusters instead of as rumors.

The Branch Is About More Than Just One Deck

A lot of the pressure for these changes came from history decks, because history decks are very good at producing:

  • long answers
  • compressed spoken summaries
  • date ranges
  • names with ASR drift
  • multi-point answer blobs

But the goal is not “optimize for history.”

That would be a trap.

The real target is broader:

  • explanatory cards where users summarize instead of reciting
  • imported decks with broken structure
  • voice-native equivalence for dates and names
  • command/control phrases coexisting with answer content
  • better handling for cards where exact string equality is just the wrong abstraction

If the implementation only works because the source material happens to be one subject area, that is not a system. That is a souvenir.

What Landed, and What Is Still Moving

A fair amount of the branch is already real:

  • more answer-shape-aware evaluation
  • stronger short-answer handling
  • better transcript preservation
  • richer evaluator logs
  • local Pipecat smoke testing
  • unattended log capture for long runs

There is also important work underway, some of it not committed yet:

  • broader under-acceptance reduction for explanatory multi-point cards
  • cleaner parsing of ugly imported answer text
  • more voice-native normalization for dates and names
  • more explicit decision-source logging
  • more regression tests built from real reviewed misses, not just happy-path examples

That uncommitted work matters because this branch has been one of those very honest engineering branches where the review notes, the smoke-test notes, and the code all inform each other in tight loops.

Or, put less politely: the app keeps finding new ways to be wrong, and I keep taking notes.

That is good. It means the system is meeting reality.

Deterministic Grading Is Better Now, but It Is Not the Final Boss

This is the part where I want to be careful not to oversell the current system.

The deterministic grader is better than it was:

  • more structural
  • less naive
  • more debuggable
  • less likely to reject obviously good answers for ridiculous reasons

That is real progress.

But there is also a limit to how far you want to push deterministic grading before the whole thing turns into an overfitted museum of exceptions and folklore.

That does not mean the deterministic work was wasted.

It means it was the right layer to improve first:

  • command routing
  • control handling
  • structured parsing
  • short-answer resilience
  • person-name behavior
  • list-vs-summary handling
  • observability

Those are foundational. A later model-backed layer should inherit them, not bulldoze them.

That is why the on-device inference work I have been sketching is intentionally narrow and conservative. The likely next step is not “let a model grade everything.” It is closer to:

  • keep the cheap path cheap
  • keep the main loop immediate
  • use on-device adjudication only for a narrow band of borderline long-answer cases
  • keep abstention first-class
  • make it optional and Android-native

In other words: add one careful new tool, not a second religion.

The Main Lesson So Far

The main lesson from this phase of VoiceAnki is that speech products punish fake abstraction almost immediately.

If your system is too generic, it feels unfair. If it is too clever, it becomes slow. If it is too rigid, users hate it. If it is too permissive, grading stops meaning anything.

The job is to keep finding the narrow path where the app feels:

  • fast
  • fair
  • understandable
  • and boring in the best possible way

Not “maximally AI.” Not “academically pure.” Not “one more heroic regex.”

Just a study loop that feels natural enough that the user forgets how much machinery is underneath it.

And if, along the way, we end up with a better parser, a less gullible speech loop, a tiny local smoke-test goblin, and a cautious roadmap for on-device adjudication, that seems like a pretty decent trade.

Building VoiceAnki: A Voice-First Study App That Kept Growing

What This Project Is

VoiceAnki started as a pretty simple idea: what if flashcard review felt more like a conversation and less like tapping through tiny buttons?

The core goal was to make studying possible in a more hands-free, audio-first way. Instead of treating voice as a gimmick layered on top of a normal flashcard app, the project pushed toward something more opinionated:

  • speak the prompt
  • listen for the answer
  • evaluate the response
  • keep the review loop moving without constant screen interaction

Over time, that turned into a much larger app than the original idea suggested. What exists now is not just a voice button on a flashcard screen. It is a full Android app with a session runtime, deck import pipeline, history, settings, AnkiWeb integration, and an increasingly serious answer-evaluation system.

This post is a look back at the work that went into it, what changed along the way, and what turned out to be harder than expected.

The Starting Point

At the beginning, the product shape was intentionally narrow:

  • Android only
  • local deck storage
  • spoken prompts
  • spoken answers
  • deterministic grading
  • lightweight study history

That focus mattered. It kept the project from immediately collapsing into a vague “AI tutor” idea. The first real work was not around machine learning at all. It was around building a dependable study loop:

  • a card queue
  • review scheduling
  • a reducer-driven session state machine
  • text-to-speech
  • Android speech recognition
  • foreground session behavior so the app could survive longer interactions

That part of the app is still the backbone of everything else. Even the newer AI and semantic work only makes sense because there is already a deterministic study engine underneath it.
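The reducer-driven shape can be sketched like this; the states and events here are simplified stand-ins, not the app's real model:

```kotlin
// Simplified stand-ins for the session state machine.
sealed interface SessionState {
    object Idle : SessionState
    data class Prompting(val cardId: Long) : SessionState
    data class Listening(val cardId: Long) : SessionState
    data class Grading(val cardId: Long, val transcript: String) : SessionState
}

sealed interface SessionEvent {
    data class CardReady(val cardId: Long) : SessionEvent
    object PromptSpoken : SessionEvent
    data class TranscriptFinal(val transcript: String) : SessionEvent
}

// A pure reducer: current state + event -> next state. Side effects
// (TTS, starting and stopping the recognizer) hang off transitions,
// which keeps "what the app thinks is happening" in one place.
fun reduce(state: SessionState, event: SessionEvent): SessionState = when {
    state is SessionState.Idle && event is SessionEvent.CardReady ->
        SessionState.Prompting(event.cardId)
    state is SessionState.Prompting && event is SessionEvent.PromptSpoken ->
        SessionState.Listening(state.cardId)
    state is SessionState.Listening && event is SessionEvent.TranscriptFinal ->
        SessionState.Grading(state.cardId, event.transcript)
    else -> state // events that don't apply in the current state are ignored
}
```

Because the reducer is pure, the session logic can be unit-tested without a phone, a microphone, or a speech recognizer in the room.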

Turning It Into a Real App

Once the core loop existed, the app started growing in the more familiar directions any real product eventually has to grow.

The project gained:

  • a home screen that lists decks
  • deck detail views
  • a settings screen for answer mode, speech rate, listening window, and grading behavior
  • session history
  • a persistent Room-backed database
  • DataStore-backed settings

That was the moment it stopped feeling like a prototype and started feeling like an app with real internal structure.

One theme that kept coming up was that nearly every “simple” feature touched more systems than expected. A new setting was never just a toggle. It usually had to travel through:

  • settings storage
  • view models
  • UI state
  • runtime configuration
  • sometimes the session reducer itself

That kind of wiring is not glamorous, but it is what makes later experimentation possible without the whole app turning into spaghetti.

Importing Decks Instead of Pretending

One of the biggest shifts in the project was deciding that the app should not live forever on a demo deck.

That meant building a real import path.

There are two different import stories in the app now:

  1. importing from files
  2. importing from AnkiWeb

The file import work led to a full import pipeline:

  • parse a deck file
  • turn it into an internal draft
  • preview the import
  • commit it into the local database

That draft step turned out to be especially useful. It created a clean boundary between “we successfully fetched or parsed something” and “we are ready to persist it as a real deck.” That became important later when the app started pulling content from the web rather than only from local files.

The .apkg path was also a turning point. Anki package import sounds straightforward until you actually have to do it on-device:

  • unzip the package
  • extract and read the SQLite content
  • resolve media references
  • map notes, cards, models, and templates into something your own app understands

That is the kind of work that is easy to underestimate from a distance. It is not especially flashy, but it is exactly the sort of feature that makes an app useful in the real world.
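As a toy sketch of the unzip-and-read step: an .apkg is a zip archive, and legacy packages store the collection database under the conventional name collection.anki2 (newer packages use different naming, which this sketch ignores, along with the media map):

```kotlin
import java.io.File
import java.util.zip.ZipFile

// Toy sketch: pull the SQLite collection out of an .apkg zip archive.
// Real import code also handles newer database naming, media files,
// and mapping notes/cards/models into the app's own schema.
fun extractCollection(apkg: File, destDir: File): File? =
    ZipFile(apkg).use { zip ->
        val entry = zip.getEntry("collection.anki2") ?: return null
        val out = File(destDir, entry.name)
        zip.getInputStream(entry).use { input ->
            out.outputStream().use { output -> input.copyTo(output) }
        }
        out
    }
```

From there the real work begins: opening the SQLite file on-device and translating someone else's data model into your own.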

AnkiWeb: From Scraping to a Better Product Decision

AnkiWeb support was one of the most iterative parts of the project.

The first instinct was what many apps would try first: scrape the shared-deck pages and build a native search/detail flow on top of that. That approach looked promising at first, but it ran straight into the reality of the modern web:

  • JavaScript-heavy pages
  • Cloudflare-style challenge behavior
  • markup that is not stable enough to treat as a public API

The project went through several rounds of trying to make that scraper path more resilient, including:

  • improving network setup and headers
  • hardening HTML parsing
  • using a WebView to render pages instead of assuming static HTML

That work was valuable, but it also taught an important product lesson: sometimes the best engineering move is to change the shape of the feature.

The eventual direction became much better:

  • use a visible in-app browser activity for AnkiWeb
  • let the user browse the real site
  • intercept .apkg downloads in-app
  • store the download privately
  • create an import draft
  • jump straight into the existing preview/import flow

That was a much more honest solution. It stopped fighting the site and started using the app’s own strengths: import, preview, and local persistence.

Making Voice Feel Like the Main Interface

The heart of the app is still the study session runtime.

A lot of the work here was not about adding more UI, but about making the voice loop feel coherent:

  • when prompts are spoken
  • when the app starts listening
  • how long the listening window should last
  • when partial recognition should be trusted
  • when to stop early on a strong answer
  • when to reveal the answer
  • how self-grading and automatic grading fit together

On Android, speech is never just “call the speech API and you’re done.” There are always edge cases:

  • permissions
  • recognizer flavor differences
  • partial results versus final results
  • cancellation timing
  • audio focus
  • device quirks

A lot of this project became an exercise in being honest about those constraints and designing around them instead of pretending they do not exist.

That honesty also showed up in the app’s session state model. The runtime is not a pile of callbacks. It is built around explicit states and events, which makes it much easier to reason about what the app thinks is happening at any given moment.

That structure paid off again and again as more features got layered in.

Answer Evaluation: From Exact Matching to Something Smarter

The earliest evaluator was mostly deterministic:

  • normalize text
  • compare against accepted answers
  • allow fuzzy matching where appropriate

That still works well for many cards. In fact, it is still the right answer for:

  • arithmetic
  • spelling
  • short identifiers
  • cases where a near miss should absolutely not pass
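A minimal sketch of that deterministic path, with normalization rules and thresholds invented for illustration:

```kotlin
// Sketch of the deterministic path: normalize, exact match, then a
// bounded edit-distance fallback. Thresholds here are illustrative.
fun normalize(s: String): String =
    s.lowercase()
        .replace(Regex("[^a-z0-9 ]"), "")
        .replace(Regex("\\s+"), " ")
        .trim()

// Classic Levenshtein distance with a single rolling row.
fun editDistance(a: String, b: String): Int {
    val dp = IntArray(b.length + 1) { it }
    for (i in 1..a.length) {
        var prev = dp[0]
        dp[0] = i
        for (j in 1..b.length) {
            val tmp = dp[j]
            dp[j] = minOf(
                dp[j] + 1,                                  // deletion
                dp[j - 1] + 1,                              // insertion
                prev + if (a[i - 1] == b[j - 1]) 0 else 1,  // substitution
            )
            prev = tmp
        }
    }
    return dp[b.length]
}

// Exact matches always pass; one edit of slack only on longer answers,
// so a spoken "5" can never fuzzily match a stored "6".
fun lexicalMatch(spoken: String, accepted: List<String>): Boolean {
    val s = normalize(spoken)
    return accepted.any { answer ->
        val a = normalize(answer)
        a == s || (a.length >= 6 && editDistance(a, s) <= 1)
    }
}
```

The length gate is the important design choice: fuzziness is a courtesy extended to long answers, never to short ones where a near miss is simply wrong.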

But as soon as the app started touching longer answers and more natural language, the limits became obvious. A strict string-oriented evaluator can be technically consistent while still feeling wrong to a human being.

That led to the semantic grading work.

The first step was not “let AI handle grading.” It was a more conservative plan:

  • keep deterministic matching first
  • add a semantic fallback only when lexical matching is not enough
  • use on-device embeddings rather than a cloud-first model

That design choice mattered. It kept the project grounded. Semantic grading was not supposed to replace the rest of the evaluator. It was supposed to rescue reasonable answers that were being unfairly rejected.
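That rescue role can be sketched as a banded decision over an embedding-similarity score; the thresholds below are placeholders, not the tuned values:

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two embedding vectors of equal length.
fun cosine(a: FloatArray, b: FloatArray): Double {
    var dot = 0.0; var na = 0.0; var nb = 0.0
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (sqrt(na) * sqrt(nb))
}

enum class SemanticDecision { ACCEPT, UNSURE, REJECT }

// Placeholder thresholds; the real policy is tuned from reviewed misses.
fun decide(
    similarity: Double,
    acceptAt: Double = 0.85,
    rejectBelow: Double = 0.60,
): SemanticDecision = when {
    similarity >= acceptAt -> SemanticDecision.ACCEPT
    similarity < rejectBelow -> SemanticDecision.REJECT
    else -> SemanticDecision.UNSURE
}
```

The middle band matters as much as the edges: an explicit UNSURE is what lets the app fall back to revealing the answer instead of guessing confidently.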

Semantic Grading Turned Out to Be Harder Than the Idea

The semantic work brought some of the most interesting engineering problems in the whole project.

The app now includes:

  • a semantic evaluator
  • an embedding cache
  • a decision policy with accept / unsure / reject bands
  • a bundled sentence-embedding model

But the path there was not smooth.

One of the first real blockers was that the original MediaPipe dependency being used for text embeddings was simply too old. On-device initialization was crashing natively on the target phone. The fix was not a clever code workaround. The real fix was dependency modernization. Once the library was upgraded to a current version, the embedder could initialize successfully.

That was a good reminder that “AI bugs” are often just normal software engineering bugs wearing a more dramatic outfit.

The second challenge was more subtle: just because semantic scoring works does not mean it should be trusted blindly.

This showed up especially clearly on a command-heavy CS50-style deck. Some answers that felt obviously related were accepted. Some answers that felt obviously wrong were also accepted. Other short command answers that a human would probably allow were rejected.

That forced a more nuanced policy:

  • semantic scoring is useful
  • but command-like and syntax-heavy answers need lexical anchors
  • shorthand answers like tail for tail <file> should still be allowed
  • vague phrases like not sure should never pass just because an embedding score looks high

That is exactly the kind of product problem that makes this sort of project interesting. The challenge is not just “can the model produce a number?” The challenge is whether the resulting behavior matches what a real learner would expect.
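A toy version of that gating, with the deny-list and anchor rule invented for illustration:

```kotlin
// Toy gate over a semantic score: high similarity alone is not enough.
// Invented rules for illustration: vague fillers never pass, and
// command-like cards require the command token itself to be spoken.
val vagueFillers = setOf("not sure", "i don't know", "no idea")

fun gatedAccept(
    spoken: String,
    accepted: String,
    similarity: Double,
    commandLike: Boolean,
): Boolean {
    val s = spoken.trim().lowercase()
    if (s in vagueFillers) return false
    if (commandLike) {
        // lexical anchor: the leading command word must actually appear
        val anchor = accepted.trim().lowercase().substringBefore(' ')
        if (anchor !in s.split(Regex("\\s+"))) return false
    }
    return similarity >= 0.80 // placeholder threshold
}
```

Under this rule, shorthand like "tail" for a stored "tail somefile" still passes, while "not sure" fails no matter how friendly its embedding score looks.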

AI Mode and the Difference Between “Plumbing” and “Experience”

Another large branch of work explored a fuller AI mode using Gemini live audio and tool-calling ideas.

This part of the project went through multiple milestones:

  • plumbing mode flags through settings, navigation, and runtime state
  • adding a live client shell
  • integrating bidirectional audio
  • wiring tool calls into the existing reducer-driven session logic
  • adding fallback behavior when live transport fails

This was useful work, but it also created a good internal standard for honesty. It became important to distinguish between:

  • a feature being “wired through the app”
  • a feature being “technically alive”
  • a feature being “good enough to present honestly as a user-facing experience”

A lot of AI product work gets fuzzy on that distinction. This project benefited from repeatedly pulling those apart.

The result is a codebase that now has real AI-related infrastructure and experiments, but still treats deterministic study behavior as the stable center of the app.

That turned out to be the right posture.

A Better Product Through Better Constraints

One of the more surprising themes in the project was that constraints improved the product.

Examples:

  • trying to scrape AnkiWeb forced a rethink that led to a better in-app browser + import handoff
  • a crashing on-device semantic path forced a proper dependency upgrade instead of magical thinking
  • overly broad semantic grading on command decks forced a more human grading policy
  • navigation crashes around import preview forced a more correct SavedStateHandle setup

None of those were “fun” problems in the moment, but they each moved the project toward something sturdier and more coherent.

The app is better because it had to survive those collisions with reality.

What Exists Now

At this point, the project includes a meaningful amount of real functionality:

  • voice-first study sessions
  • spoken prompts and spoken answers
  • persistent review scheduling
  • settings and history
  • deck import from local files
  • .apkg import support
  • AnkiWeb browsing and direct import handoff
  • bundled starter decks
  • semantic grading infrastructure
  • on-device text embeddings for semantic evaluation
  • experimental AI/live-session infrastructure

There is also a growing body of product and platform planning around where the app could go next:

  • Gemini-assisted study features
  • stronger semantic grading policies
  • Wear OS companion support
  • car-aware or Android Auto-adjacent ideas

Not all of those are finished products, but they represent something important: the project is no longer just a pile of features. It has a direction.

What I Learned From Building It

The biggest lesson is that “voice-first study app” sounds smaller than it really is.

You are not just building:

  • a UI
  • a speech recognizer
  • a deck importer

You are building the glue between all of them, and the glue is where most of the actual engineering lives.

Another lesson is that good product behavior often comes from restraint, not ambition.

The best parts of this project are not the ones where the app tries to be magical. They are the parts where it:

  • stays deterministic when it should
  • uses ML as support rather than theater
  • preserves clear state boundaries
  • avoids pretending unstable integrations are already polished product experiences

That kind of discipline is not always flashy, but it is what makes a project feel trustworthy.

What Comes Next

The next stage of work is less about piling on new surfaces and more about sharpening the judgment of the app.

The biggest open question is not “can we add more AI?” It is:

how do we make the app accept the right answers, reject the wrong ones, and feel fair to the learner?

That likely means:

  • better semantic policies
  • deck-sensitive grading behavior
  • clearer settings around evaluation style
  • more real-world testing across different kinds of decks

There is still plenty of room to grow, but the project is now at an interesting point: it already does a lot, and the challenge is no longer proving that the idea can exist. The challenge is making it consistently good.

That is a much better problem to have.

Friends.

Featured

Warning: this story is a bit of an homage to Hunter S. Thompson and gonzo journalism. The story is 96% true, but ~4% is completely made up.

As I get older I make an effort to spend more time with friends. For the most part this has been a positive experience. Friends, unlike family, are people you choose and who also choose, I hope, to spend their time with you. I find that for the most part this makes for more enjoyable interactions than you have with close family members, and probably most importantly, when you're tired of your friends you can choose to no longer associate with them. Which I guess is true of family too, but I think for most people that decision should not be taken as lightly.

So a few days ago I got a call at 5:00 am from a good friend saying he was in trouble and needed to be picked up from the airport right away. As a good friend might, I didn't ask any questions and dutifully went to pick him up from Southwest at the Oakland airport. He seemed fine, but as we exited the airport he explained to me that he had spent the weekend in Las Vegas, gotten himself into a fairly impressive bit of debt, and needed to borrow some money in order to pay his mortgage. Motherfucking friends! Anyways, I explained that I wasn't comfortable lending him the money, and after a few deep breaths he proposed an alternative solution. Because he needed the money NOW (apparently casino markers are not low interest!), with tears in his eyes he suggested that he would sell me his prized 2013 Porsche 911 for 50% off if I could get him the cash that morning. After pretending to think about this offer for two or three stop lights, and reminding my friend that everything would be OK, I took him up on the offer.

After waiting for the bank to open, I was now the proud owner of a 911 S convertible! And hopefully my friend was a bit closer to being out of the hell he had just created for himself.

This is going to be fun!

Now, anyone who knows me knows that I have been a Porsche fanatic for life. When I was born my dad was the proud owner of a poorly cared for yellow 1974 911 SC that spent at least 60% of its life dumping large amounts of oil onto the ground. I have memories as far back as I can remember of him picking me up from school or taking me to school. I loved sitting in those largely useless back seats and just listening to the engine, or sitting in the front seat silently rooting for my dad as he passed various Hondas and Toyotas on the way home. I loved that car, and when my dad sold it preceding his retirement, I cried. To this day I have a poster of a red (I would never own a red car) 911 on my wall. So making this decision was really a no-brainer. The big question was, well, WHAT NOW?

Out of sheer luck I had arranged to meet an old friend of mine for wine tasting; I want to be clear, THIS WAS NOT MY IDEA! I like to drink as much as the next guy, but sipping on tiny amounts of overpriced grape juice while pontificating over hints of eucalyptus is not normally my idea of a great time. But the chance to drive to Napa in a Porsche was basically irresistible.

So Saturday morning, after losing my weekly basketball game to my most reliable friend, I set off on my trip to Napa. The car is just great, and I wish I was a practiced automotive reviewer so I could really describe what it's like, but I am not, and at least for this portion of the trip I would be lying if I said it was brilliant, because it wasn't. What would have normally been a 42-minute cruise to "the wine country" was basically an hour and a half of stop-and-go traffic. I did take the top down and breathe in some of the crisp swamp air that you can find in the low-lying valleys of northern California, but this was hardly the breathtaking, mind-bending back-roads experience that I had dreamed of.

After arriving in Napa, while sitting in traffic, I decided to try out the car's hands-free system and gave my friend Josh a call. He said to meet him at the winery at the "barn in the back."

Now I have to apologize here: I didn't take any pictures of the first winery. I always find taking pictures a bit awkward, and I will be honest, you didn't miss much. A bunch of old people were gathered in a tent, regularly lining up to receive tiny pours of "free" wine. I wondered about this a little bit. Do wineries just regularly give out free samples?... Well, no! It turns out that if you agree to buy 4 bottles a month from the winery in question you become a member, and you can invite your friends to come and drink for "free"! Josh, thanks for being a friend!

I have known Josh since middle school. We don't get to see each other much these days, but through copious amounts of Facebook stalking I have managed to stay in touch, and we have met up in Chicago a few times for dinner, and now for wine in Napa!

When we were kids I had a decent business burning CDs full of pirated music and selling them at school.

Out of friendship I shared one of my more formative discoveries with Josh. It was this really interesting little program called Napster. I quickly learned why good businesses have secrets! After teaching Josh how to convert MP3s to WAVs, he quickly started his own competing business selling CDs. And because he lived in a nicer neighborhood with broadband, my own business fell apart, as I couldn't keep up with the number of songs he could download with my shitty 56k modem. These days Josh has an engineering degree and a master's in business, works as a management consultant, and has developed the very Type A personality to match.

After about an hour of trying to drink as much free wine as possible, bullshitting, pontificating, and humble-bragging, I learned that Josh worked at Porsche of all places (goddamn it, I still can't win!). I naturally brought up the Napster story, and we had a good laugh about it. Then, with the grace and confidence of a master orator, I gestured as I spoke and knocked over my first glass of red wine, which splashed, like the brush to the canvas of a great work of art, onto Josh's button-down shirt. (Insert dramatic pause.)

In my defense, I thought the wine went well with the shirt. He took it like a champ and we had another good laugh.... Josh, I'm sorry; at least consciously it was an accident. I can't be held responsible for the actions of my subconscious. Like I said, Josh works for Porsche, and I followed him to the next winery.

That's Josh's company Taycan up ahead.... 911 > Taycan, just saying.

Outside deck of winery.

In a moment of weakness they managed to sell me a bottle of wine for some other friends.

OK, so I have to admit that tooling around "the wine country" in a couple of Porsches was basically a dream come true. Not to mention I got to spill some wine on a good friend and former rival. Thanks Josh, but mostly his wonderful wife Carla, for inviting me.

I was going to split this into 2 posts, but as usual laziness has won the day. The following weekend I decided to take the car up to Scotts Valley to take a peek inside Bruce Canepa's shop. He is the largest importer of Porsche 959s in the US. I was hoping the car might help me to look like a potential customer instead of the looky-loo hooligan that I am. It didn't work, but I got a few pictures inside of Bruce's massive shop. (Story continues after the photos.)

Before I left home, some Facebook stalking led me to send a message to my friend Hustle. His real name is Russell but I call him Hustle.

Russell joined my high school during sophomore year. The story goes: after his first year at the local public high school, Russell received a D in AP English and his parents assumed he was on drugs. After attempting to show up for his first day of school, he was informed that he was no longer enrolled and that he would be attending my private school instead. Hustle did eventually manage to get himself back to public school after only a year, but we developed a decent friendship and I made efforts to stay in touch over the years.

Knowing that Russell lived in Santa Cruz, I was hoping he might call me back during my trip to Scotts Valley, and as luck would have it I received the following text: "I'm in Watsonville having beers at the 'beer mule' at the moment, but i'll be back to the ranch mid-afternoon I reckon... ya coming to town?"

Now, the last time I checked with Russell he had just graduated from UCSD and was spending his time surfing and fishing, so I was a bit perplexed by the "ranch," but I was looking for any excuse for more driving, so I agreed to meet up with him that afternoon. In the meantime I made a quick 60-mile run to what I thought was Carmel on Highway 1 to pick up a pair of sunglasses. You shouldn't really drive a 911 without sunglasses, but you absolutely cannot show up at the ranch house of someone you haven't seen for 10+ years without a pair!

Thirty minutes later... I walked into the local Sunglass Hut, grabbed a pair off the rack, and spent 10 minutes arguing with the clerk about my need for 2 receipts... I just wanted to make sure I could return these "expertly crafted, polarized, shatterproof, 200 dollar, 7 ounce" pieces of Chinese plastic in the event I lost the primary receipt. As the clerk relented and printed my 2nd receipt, I received this text from Russell: "Would love to have you over for dinner, have an old boat you can crash in"... Well... fuck me running, now I have dinner and a place to stay! Now this is what friends are for. I flogged the 911 back up the 1 to somewhere in the Santa Cruz Mountains. These are the kinds of back roads that motoring dreams are made of. After about 25 minutes and 3-4 near-death experiences, I entered what appeared to be a residential neighborhood.

The very first house to my left was a large white sort of Spanish-style mansion. Now I am thinking to myself, "This is not really very ranch-like, but hell, I'm game!" This wasn't the house, so I kept driving. The next house was another large Spanish-style house, but this time canary yellow with colored tile. This was also not the house. But this was the end of the road, so I was a bit confused and out of cell phone range. As I went back down the hill, I noticed a boat and a large Ford F-250 parked in front of a wooden fence about 6 ft high. Close enough to a ranch for me, and I parked outside.

I hadn't seen Hustle in years, and between seeing him and his compound for the first time, I was a bit overwhelmed. At an undisclosed location in the Crustacean Mountains, my high school friend had ordered a shipping container from the Port of Oakland and had it transported ~70 miles. He then modified the container, gave it running water, a full complement of appliances, and a bathroom complete with a tub, shower, and urinal. I wish I had taken more pictures of the interior, but I am always hesitant to take pictures inside people's homes.

In addition to his container-based home, he raises sheep and chickens, and naturally grows a small amount of marijuana... this is northern California after all! At this point I was tired, excited to finally see my old friend, and a bit beat from all the driving. Russell offered me a variety of his locally grown products, and we relaxed and reminisced about times gone by.

As the night went on, his wife returned home and they began to prepare dinner. Now, at my house, things get microwaved and/or boiled. I pride myself on both my nuking and water-heating abilities. This was not the case at the house of Hustle. For dinner Mrs. Hustle made from-scratch duck pot stickers, which were amazing, accompanied by my choice of chardonnay or Coors Light. After verifying that she was in fact Mrs. Hustle and not a kidnapping victim, I ate a brilliant dinner by their indoor and outdoor fireplaces, followed by the hand-rolled bounties of their harvest. At this point I took a quick nap on their couch.

A while later I was informed that Russell had created a bed for me in the boat (pictured above) as promised. He used an electric blanket to make sure things were warm enough, and I was surprised to find out it was actually quite comfortable. At about 1 am I woke up coughing. I have terrible allergies, and the combination of wood fire and sleeping in a boat did not agree with me. I got up, peed over the side of the boat, puked a little bit (caused by the coughing), and went back to sleep. Somewhere around 3 am I woke up again, peed and puked over the side, and decided that this was probably not going to work. I went and grabbed my covers from the cabin, climbed into the 911, and found restful sleep after I turned on the seat heaters.

I never imagined I would sleep in a 911 but that night I did!

Surprisingly, I slept well. Somewhere around 6:30 am, Mrs. Hustle headed off for work. I tried to say good morning, but I don't think she noticed me in the car.

I went back to sleep, and somewhere around 9 am Russell woke me up and informed me that he needed to go do some work on his other boat down at the Santa Cruz harbor. After we gathered a sander, beer, and some horticulture, I followed the F-250 to the local hardware store and then to the harbor. Now, obviously Hustle has some home-court advantage, and I was still a bit groggy, but I just barely managed to keep up with his F-250 in the 911 through the back roads of Santa Cruz.

The SC harbor is gorgeous and this day was no exception.

SC harbor.

When we arrived we went down to the boat, and Hustle threw on some Bob Marley played back through the hidden sound system on his boat. He proceeded to start sanding and painting, and I mostly just chilled, had another Coors Light, and shot the shit about former teachers and friends from high school. Three are dead; the rest are happily married and enjoying their lives.

Some time around 11:30 I started to get hungry and offered to take Russell out to eat for lunch. He informed me that he couldn't, because he had prepared a lunch for Mrs. Hustle, who was going to meet us at noon during her lunch break.... no word on my lunch..

I checked with Mrs. Hustle again and she assured me that she was not in fact suffering from Stockholm syndrome. The rest of my time in Hustle land is not particularly entertaining, but I want to take a moment to thank the Hustles for the hospitality and quickly acknowledge that my friend Russell is winning at life!

Time to go home. So I fire up the 911 and get back on the 1, headed toward the San Mateo Bridge. If you have not had a chance to go for a drive on Highway 1, I highly recommend it. It is a long coastal highway with incredible views, very few police officers, and more than enough curves to put the life of any Porsche-driving douchebag at risk.

After reaching the 92 I was tired and ready to go home when I got an idea. I called my friend T (mostly just to test the hands-free feature of the car) and asked if I could come by to drop off the bottle of wine I had picked up during the prior week. T lives in Walnut Creek with her husband and 2 children. We have been friends for a long time, and she seemed happy that I wanted to visit. I hung up the phone and proceeded to sit in bumper-to-bumper traffic for 2 hours.

Arriving in T's quiet and fancy neighborhood, I promptly revved the engine to 7300 rpm and was soundly ignored by ~everything. Entering la casa de T&A (yes, those are their initials) is always a great time. They like to make stuff, so often you get to see some new cool furniture they have made or some remodeling project in progress. But tonight I walked into a large number of children's toys and the sounds of people trying to convince a 2-year-old to eat broccoli. I walked in, told T happy birthday, and handed her a bottle of wine. I attempted to help feed a 2-year-old for roughly 3 seconds, at which point her husband solved the problem by adding a generous amount of ranch dressing to the broccoli. After the 2-year-old seemed satisfied, I watched him laugh and run around in circles while screaming for 30 minutes straight, at which point T said "it's bed time." At this point I was happy to open the bottle of wine, sit on the couch, and wait for the parents to do whatever they do during bed time. But I was informed that I was going to help put people to sleep.

I mostly lay on the floor and watched T read a book to her son.

I realize this story got a little boring, but I value my friends and all the experiences we have had over the years, and hopefully the ones we will have in the future.

As for the 911 I had to get rid of it for financial reasons and I miss it every day.

A new adventure

Featured

My friend L

So, after a long illness that can only be described as a break from reality and a "feeling that everything was dead," my good friend and former roommate L helped me to land a new job, subsequently saving my life and preventing me from becoming homeless.

Let's talk about L. I met L some time around 2017 while working for a stereotypical San Francisco startup. You know the kind: a few smart engineers/PhDs and myself, working on a machine-learning-based, cloud-based signals intelligence platform utilizing deep neural networks to change the world by helping salespeople sell things.

L was a perfect match, fresh out of one of the standard Silicon Valley feeder schools: young Caucasian male, slightly emaciated, round glasses, tight jeans, appropriately naive, and an occupant of what I have previously referred to as programmer pods.

est. 2010, a programmer pod is a questionable interpretation of mid-century modern architecture, intended largely for the purpose of separating tech company employees from their wallets and the masses.

Getting to know L over the next 3 years, I found that he was a little bit different. He didn't really like to do much work (I want to be clear, he is an incredibly talented programmer); he enjoyed spending the vast majority of his time making coffee, reading Tolstoy, and diagnosing himself with an abundance of questionable medical conditions. As you might expect, L was eventually let go for not doing much work. The company was sold to a larger, better-funded cloud-based communications platform built on deep neural networks to..... help salespeople sell things...

And I lost my mind.

Anyways, skipping ahead to today. L called and asked if I wanted to go out for a drive. In standard L fashion he showed up roughly 3 hours late, after I had already eaten, and asked if I wanted to grab dinner. I didn't. So on a whim we got on the highway and started driving. We stayed on the highway while chatting about largely nothing, and as we ran into the Richmond-San Rafael Bridge, L instructed me to get off the highway so we could avoid paying the toll and ending up at San Quentin.

Point Molate is dark, very dark: a largely abandoned former winery, military base, and industrial area. It has no street lights and very few people at night. We drove around looking for a place to stop, to sit around, maybe look at the bay, kvetch, and head off to find something to eat.

Instead I found myself just driving further and further from the highway on deteriorating roads. After a few miles of driving through abandoned former military housing, some interesting old warehouses, and a lot of barbed wire fences, we ran into a sign resembling the one below, at the base of a steep hill.

I wish I had taken more pictures.

At the urging of L, we continued to drive over the hill onto the other side of Point Molate. The drive was interesting. But unfortunately, as we continued to drive, we started to see strange black stars nailed to the trees: maybe one suspiciously occult black star every 3000 feet.

It was dark, and I will admit to being a little worried. L insisted we keep driving. As we started to reach the peak of the hill, I began to realize we had been driving for maybe 40 minutes on a twisty, dilapidated road decorated with pentagrams in the middle of the night, and I got a little worried. As we began our descent, L (not a Bay Area native) started to bring up the fact that there are known to be a few cults in the Bay Area... and wondered aloud if the black stars were in any way related. Naturally, around the next several corners we could see that we were headed towards a small, minimally lit marina with what appeared to be small shacks and house boats... the perfect place for a black-star-idolizing and hopefully friendly Satan-worshiping cult. Just as we arrived at the bottom of the hill, we saw a sign labeled with the words "black star" and a small goat with only a passing similarity to this guy.

About 1000 feet later we ran into a pen of live goats, and at this point I could only hope that the occupants of the animal-sacrificing, Satan-worshiping cult we had discovered at the end of a private road on an abandoned military base were friendly. As we passed the live goats, we found ourselves in a dimly lit parking lot with some rusty heavy industrial equipment and two empty mid-90s economy cars. The parking lot was separated from the marina and another yurt/shack by a small embankment topped by a set of abandoned railroad tracks. At this point I parked the car so we could get out, stretch our legs, smoke a cigarette, and decide what to do next.

Out of an abundance of caution, and fear of whoever was living in the marina, I decided we should walk down the railroad tracks in the opposite direction of the potentially inhabited part of the marina, in the dark, with really no good idea of where we were headed. On this side of Point Molate you can see out into the bay, but there is no lighting, and I was sort of disappointed that there was nothing interesting to see. And then... we ran into this.

Is this just not a digital signal?

Featured

I set out wanting to capture the signal transmitted from my gate opener (pictured at the very bottom), but the results are confusing, so I am attempting to document what I did.

The first thing I did was build a basic receiver in GNU Radio to do the capturing, shown here.

testfile

osmocom source: this is an interface to our software-defined radio (hardware)

  • sample rate: 16M (our hardware can effectively sample between 8 million and 20 million IQ samples per second; because of my initially confusing results I upped it to 16M, hoping it was a resolution problem)
  • Ch0: frequency (center frequency): 300M (300 MHz)
  • Ch0: RF gain: 0 (this controls an RX amp built into our hardware)

QT GUI sink: (just a way to output our signal in the frequency domain)

frequency_domain

The brown line is peak hold; using this I estimated the transmit frequency to be ~300.2 MHz. The blue spike at 300 is a GNU Radio side effect which occurs at the center frequency.
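The peak-hold estimate above amounts to "find the DFT bin with the largest magnitude and convert its index to a frequency." Here is a toy sketch of that idea with scaled-down, made-up numbers (a 16 kHz capture with a tone 2 kHz above center standing in for the real 16 MS/s capture with the opener ~200 kHz above 300 MHz); it is not the GNU Radio code itself.

```python
import cmath
import math

samp_rate = 16_000   # samples per second (toy value, not the real 16M)
offset_hz = 2_000    # tone offset above the center frequency
n = 512              # DFT size

# Complex baseband IQ samples of a pure tone at +offset_hz.
iq = [cmath.exp(2j * math.pi * offset_hz * t / samp_rate) for t in range(n)]

# Brute-force DFT magnitudes -- roughly what the QT GUI sink plots per bin.
mags = [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                for t, s in enumerate(iq)))
        for k in range(n)]

peak_bin = max(range(n), key=lambda k: mags[k])
peak_hz = peak_bin * samp_rate / n   # bin index -> Hz above center
print(peak_hz)                       # -> 2000.0
```

The bin resolution here is samp_rate/n, which is also why upping the sample rate alone doesn't buy resolution: it is the capture length in samples that sets the bin width.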

The next step, I assumed, would be to attempt to "tune" and demodulate the signal, so I built another flowgraph for that:

reciver


  • File source: (just our IQ samples stored in a file)
  • Throttle: this prevents GNU Radio from sending data faster than 16 MS/s
  • QT GUI sink: outputs the previous graph
  • Frequency Xlating FIR Filter (was supposed to accomplish the following):
    • tune to our signal by downshifting the captured spectrum by the difference between the center frequency and the tuning frequency
    • low-pass filter the shifted signal with a cutoff of 50 kHz and a transition width of 1 kHz
    • decimate the outgoing data stream down to our working_samp_rate of 400 kHz
    • XFF details (how we do the above with this block):
      • Decimation: int(samp_rate/working_samp_rate)
      • Taps: firdes.low_pass(1, samp_rate, filter_cutoff, filter_transition)
      • Center Frequency: freq - center_freq
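The three steps that block performs can be spelled out in plain Python. This is a toy sketch with made-up numbers and a crude boxcar average standing in for the real firdes.low_pass taps, just to show the mix/filter/decimate mechanics, not a replacement for the actual flowgraph.

```python
import cmath
import math

def freq_xlate_decimate(iq, samp_rate, offset_hz, decimation, ntaps=8):
    """Toy version of the Frequency Xlating FIR Filter: mix the signal of
    interest down to 0 Hz, low-pass it with a crude moving average, then
    keep every `decimation`-th sample."""
    # 1. Downshift by multiplying with a complex exponential at -offset_hz.
    shifted = [s * cmath.exp(-2j * math.pi * offset_hz * t / samp_rate)
               for t, s in enumerate(iq)]
    # 2. Crude low-pass: moving average over the last ntaps samples.
    lowpassed = [sum(shifted[max(0, t - ntaps + 1):t + 1]) / ntaps
                 for t in range(len(shifted))]
    # 3. Decimate down to the working sample rate.
    return lowpassed[::decimation]

# A pure tone at +2 kHz, mixed down by exactly 2 kHz, should come out as a
# (near) constant value at 0 Hz once the filter has warmed up.
samp_rate, offset_hz = 16_000, 2_000
iq = [cmath.exp(2j * math.pi * offset_hz * t / samp_rate) for t in range(160)]
baseband = freq_xlate_decimate(iq, samp_rate, offset_hz, decimation=40)
```

With the real block, decimation = int(16M/400k) = 40 as well; the only substantive difference is that GNU Radio uses a properly designed FIR instead of the boxcar.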

Demodulation (maybe this is where I went wrong?): I am only really familiar with on-off keying, so I use the "Complex to Mag" block to provide a high, roughly constant value when the carrier is present and a low, roughly constant value when the carrier is absent.

The following shows both the input provided to Complex to Mag and the output afterwards:

final

So, looking at things in the time domain, demodulated or not, I don't see anything I would recognize as digital (maybe a nice square wave?). So I suspect that at least one of the following things is true.

  1. My gate opener is broken
  2. My gate opener is not digital
  3. My gate opener does not use on-off keying?
  4. Something else?
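One cheap sanity check on the pipeline itself: synthesize an ideal OOK signal and confirm that Complex to Mag really does turn it into the square wave I was hoping to see. This is a toy sketch with made-up rates, not the real capture; if the synthetic case works and the real one doesn't, the problem is the opener, not the flowgraph.

```python
import cmath
import math

# A carrier keyed on and off by a bit pattern (classic OOK), at toy rates.
samp_rate, carrier_hz, baud = 16_000, 2_000, 400
bits = [1, 0, 1, 1, 0, 0, 1, 0]
sps = samp_rate // baud   # samples per bit period (40 here)

iq = [bits[t // sps] * cmath.exp(2j * math.pi * carrier_hz * t / samp_rate)
      for t in range(len(bits) * sps)]

# "Complex to Mag": carrier present -> magnitude ~1, absent -> ~0.
# Plotted over time, this is a clean square wave.
mags = [abs(s) for s in iq]

# Slice one sample from the middle of each bit period to recover the bits.
recovered = [1 if mags[i * sps + sps // 2] > 0.5 else 0
             for i in range(len(bits))]
print(recovered == bits)   # -> True
```

If the real gate opener used OOK, its magnitude trace should look at least roughly like this synthetic one; the fact that it doesn't is what points at options 2-4 above.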

IMG_20180323_080512

please don’t call me an expert

noun
1. a person who has special skill or knowledge in some particular field; specialist; authority
 
As I get older I get referred to as an expert more and more often. Now, on a certain level I find this very flattering (and I really do appreciate all of the flattery I can get); unfortunately I don't think this sort of thing helps me or anyone else. In fact I think it does everyone real harm, and the following is why.
In the last couple of years I have been referred to as an expert in computer security, "DevOps," and most recently telephony. Certainly I know a lot about these things, because I have spent a lot of time learning about them and practicing them, for both work and pleasure. The problem starts to occur when we look at the definition of an "expert" (at the top of this post). Well, I am a person, so that part is good. Hmmm... maybe I am an expert? Has a "special skill"? Well, this is where I have my first problem. None of these skills that I have been identified for "expertise" in are special.
I regard knowledge in these areas to be as necessary for living in the world as, say, the ability to drive, open a bank account, or make a trip to the grocery store. "DevOps" (which no one really agrees on the meaning of anyways) is simply the practice of merging the development of software with the deployment of said software, usually in the context of software as a service. Today there are literally millions of SaaS products that billions of people use every day. Why wouldn't everyone want or need to have some understanding of how this is done? The practice is so common it should (and does) offer some real advantage to the people who know how it's done, and because of how common it is and how important it has become to the functioning of the world, the knowledge on the subject is freely available in multiple languages from a huge number of easily accessible sources, and practice can be accomplished from the comfort of your own home for the cost of a computer and an internet connection. Nothing about this knowledge is special, and telling people that it is simply discourages them from trying to learn about something that is easy to read about, understand, and even try yourself.
Let's take a look at the "particular field" part. This part does fit a little better, but I still don't like it, especially when you talk about expertise in information technology or computers. In the modern world computers are seen in every profession, from cab driving to medicine. If you don't think your doctor should have knowledge about computer security, I hope you don't mind your medical records being public, because if she relies on the so-called experts to protect your records, they most certainly won't be private for long.
"Authority": this one bothers me the most. Certainly all fields have people who have been working in them a long time and people who are really good, and on face value they might be worth considering a trusted authority. But consider how many times these so-called authorities have been terribly wrong. The following are a few of my favorite examples.

I think there is a world market for maybe five computers.

Thomas Watson, president of IBM, 1943

F = G·m₁m₂/r²

Newton, law of universal gravitation

I like this one a lot because it has fancy math (which lots of people believe makes something authoritative). If you don't know how this one ends: some patent clerk was able to prove this wrong, and in the process created the "theory of relativity." Ironically, even Albert Einstein may turn out to be wrong in many ways.
At one point millions of people thought this guy would save Germany..... whoops!
Being able to rely on some magical authority would be nice, but in reality there is no substitute for critical thinking! And this is another problem with the idea of experts: it discourages the non-experts from giving their own shot at critical thinking. I am no expert, but I think I saw more than 5 computers today.
Originally I had more to say on this topic, but I need to try and sleep.