
Latest from the team

Fresh Takes & Updates

Keep up with the latest feature rollouts and updates from the team


Android XR AI Glasses emulator

Team News

Feb 18, 2026

SUPER SPEX v1: What We Learned Building AI Glasses for Tradespeople using Android XR

Super Spex are AI glasses for tradespeople: a hands-free interface to Super Engineer (our technical AI), delivered via audio plus a HUD.  The Super Spex concept is to provide, hands-free, all the relevant AI tools a tradesperson might need to do their job better - and hopefully to reduce the number of phones accidentally smashed on site!

Our team built and open-sourced a V1 prototype of Super Spex to stress-test Android XR in real tradespeople-based workflows—and to share working patterns with other Android XR developers.  All code & documentation is available via the Super Spex GitHub Repo.

Android XR AI Glasses development team

^ Our Spex team working on honing 'Tea Mode' - the world's first AI-powered tea-maker ;-)

Objectives

To date, the majority of Android XR development has been focused on AR Glasses, so we were entering fairly virgin territory with our focus on AI Glasses - which are distinct in capability from AR-heavy experiences.

We discovered quickly that AI-glasses constraints are very tight: you’re optimizing for quick capture, low cognitive load, and reliable audio/visual guidance, rather than the persistent 3D world mapping of AR.

Our 4 core project objectives out of the gate were:

  • Validate hands-free workflows for technicians: the phone is useful but the wrong form factor on site—people drop, smash, and juggle it while working. AI glasses should remove friction while still giving “expert help in the moment.”

  • Learn the Android XR stack by building: pairing, camera/mic access, and projection to a HUD, then document what worked.

  • Create a reusable “modes” framework: composable workflows that map to real tasks (identify equipment, ask for help, capture notes, call for backup).

  • Open-source early: so the wider community can fork proven patterns.

What we’ve delivered so far

Super Spex V1 is an AI-powered glasses companion app built with React Native (Expo) + TypeScript, with a Kotlin native module bridging into Jetpack XR. The architecture is explicitly “phone hub, glasses display.”

AI answers are streamed via Super Engineer, described in the repo as a Gemini-based, multi-agent RAG wrapper grounded in thousands of technical manuals.

The V1 feature set we’ve built out is broken down as follows:

  • XR Glasses Projection: project UI to connected XR glasses (Jetpack XR).

  • AI-Powered Modes: speech, camera capture, tagging, and AI answer streaming.  Also, for fun we created ‘Tea Mode’ (more below).

  • Remote View Streaming: stream the glasses view to a web viewer using Agora RTC (two-way audio/video), with a small token server in cloudflare-workers/.

  • Web demo + cross-platform structure: platform-split files (.web.ts / .ts) so the same codebase can run in the browser.

  • Parking timer: a practical “don’t get ticketed” HUD timer plus phone notifications.
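The platform-split structure mentioned above can be sketched in a few lines of TypeScript. This is a minimal illustration of the pattern, not the repo's actual code: the `SpeechCapture` interface and module names are my own, and the bundler's file-based resolution is faked as a lookup.

```typescript
// Sketch of the platform-split pattern (names are illustrative, not the
// repo's). In the real codebase the bundler resolves `speech.web.ts` on
// web and `speech.ts` on native; here the two files are faked as objects.

interface SpeechCapture {
  start(): void;
  stop(): string; // returns the captured transcript
}

// Stand-in for speech.ts (native implementation).
const nativeImpl: SpeechCapture = {
  start() {},
  stop() { return "native transcript"; },
};

// Stand-in for speech.web.ts (browser implementation).
const webImpl: SpeechCapture = {
  start() {},
  stop() { return "web transcript"; },
};

// Approximates the bundler's resolution rule: prefer `.web.ts` on web.
function resolveSpeech(platform: "web" | "android"): SpeechCapture {
  return platform === "web" ? webImpl : nativeImpl;
}
```

The pay-off of the pattern is that shared screens only ever import the shared interface, so the same React components run unchanged in the browser demo and on device.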

Experiencing the different ‘modes’

The main functionality of Super Spex is delivered via ‘modes’ - essentially workflows composed of specific Android XR capabilities (and third-party services like Agora).  Each mode is designed to address a specific tradesperson’s need:

  • Identify: snap a photo of a piece of equipment to quickly identify what it is, and how it works

  • Help: snap a photo of something and ask Super Engineer for help with it - instructions of how to use it or how to fix it

  • Notes: take video or photo notes of your job with speech live transcribed as the user talks (and videos / photos captured and saved)

  • Live Stream: the user streams live video from the glasses to multiple people via a simple shareable link, and gets live audio back via the glasses.  Remote fixes and instructions made easy.  

  • Parking timer: never get a parking fine again!  Set a timer, and get audio and HUD alerts.

  • Tea maker mode: take a snap of the colour of your colleague’s tea, and let our AI colour match & give you guidance when you next make them a cuppa!  Everyone hates a badly made cup of tea ;-)    

    AI tea-maker for tradesmen

For Android XR devs, arguably the most valuable code is the XR bridge and structure in modules/xr-glasses/android/… (connection management, Compose UI for glasses, projection, and streaming), plus the repo’s maintenance docs (XR projection, streaming, camera capture, speech recognition).

Technical challenges (Android XR in practice)

Our V1 prototyping validated the opportunity, and also highlighted the current realities within the XR dev space:

  • Documentation scarcity: getting projection and “device capability access” (camera/mic, HUD output) working took time because references were limited, so we embedded learnings directly in the codebase.

  • Emulator gaps (speech is the headline): the XR emulator lacks Google Services sign-in and on-device speech recognition, forcing backend transcription, which adds latency and a less smooth UX than real hardware.

  • “Connected” doesn’t mean “usable”: pairing via the platform glasses app / Bluetooth is easy; reliably using that pair is where most engineering time goes.

  • Performance/compute constraints: the XR emulator can be RAM-hungry (we saw ~22GB), and hardware acceleration/driver mismatch can cause slowdowns.  We’ve now got good experience of how teams should set up dev rigs.

  • Keeping XR isolated: we leaned on process separation for XR activities (:xr_process) to avoid destabilising React Native.  Developing the React Native app and the XR side in parallel is unavoidable given the lack of access to real Android XR AI glasses, but it does add overhead.
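The process-isolation trick in the last bullet is essentially a one-line manifest change. A minimal sketch (the activity name here is illustrative, not the repo's):

```xml
<!-- Sketch: run the XR activity in its own process so a crash there
     can't take down the React Native host process.
     The activity name is a hypothetical example. -->
<activity
    android:name=".xr.GlassesProjectionActivity"
    android:process=":xr_process" />
```

The leading colon makes the process private to the app; Android spins it up on demand and can kill it independently of the main process.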

What next?

V1 is the “line in the sand”: we’ve created and tested working modes, documented XR patterns, and delivered an open repo to the XR community.  Next steps are about getting from prototype to field-ready:

  • Test on a real Android XR glasses device to understand HUD legibility, audio quality, gesture/tap ergonomics, and end-to-end latency.

  • Harden the workflow framework around the long-term product direction (Identify / Help / Notes / Stream).

  • Improve speech + streaming reliability by tightening local capture paths and making backend fallbacks explicit and observable.

  • Keep shipping in the open: share more XR recipes and “copy/pasteable” patterns for other Android XR teams.

If you’re building on Android XR and want a real, end-to-end reference (XR projection → camera/speech capture → AI answer streaming → remote support), the Super Spex GitHub repo is a great practical starting point.

We look forward to progressing Super Spex to the next level.  We’re always happy to chat: team@superengineer.app ;-)

Thinking

Feb 8, 2026

The evolution of the Tradesperson. Is the robot the future of Blue Collar work?

Elon Musk recently pivoted Tesla from manufacturing cars to autonomous vehicles & robots.  His vision of automated manual work as a thing of the not-too-distant future is propping up Tesla's valuation ($1.2 trillion at the time of writing) as car sales decline.

Robots are coming

Current demos of Musk's Optimus robots show them doing basic tasks in controlled environments such as folding shirts and sorting batteries into boxes.

Anecdotally, on Instagram the other day I watched a video of a robot loading wine glasses delicately into a dishwasher - suggesting the dexterity of robots has increased to a significant level.

That said, I've also watched a variety of robot fail videos too, including the now famous Russian robo-fail video.

My broad takeaway is that progress in robotics is impressive, but there's a long way to go.

Importance of World Models

Accelerated progress in autonomous robots is linked to advancements in AI, and in particular World Models.  World Models are distinct from the Large Language Models (LLMs) most of us are familiar with, which power apps like ChatGPT.

In simple terms, World Models model the 'physical world intelligence' i.e. the bit of human intelligence that allows us to interact with the real world and do stuff.  If you think of a human's intelligence, we have multiple levels of sophistication that enable us to do what we can do; we're not just a brain that can think and talk, but a physical being that can think, talk, move and interact.

The Large Language Model does the 'thinking' bit, and the World Model does the 'doing' bit - by helping the robot 'imagine before acting'.

But the doing bit is magnitudes more complex to model than the thinking bit.

Autonomous vehicles and low dimension problems

Consider self-driving cars.  It took c.15 years of R&D to produce an autonomous vehicle (work started in c.2005, and the first Waymo-powered cars took to the streets of San Francisco in c.2020), but the world problem that self-driving cars master is, relatively speaking, a simple one.

Broken down to its core, the autonomous vehicle's main goal is to 'not bump into anything else' - i.e. to stop or swerve as quickly as possible when it detects an object in the way.

As impressive and futuristic as self-driving cars are, they're the low-hanging fruit of World AI, as they're only solving a very 'low dimension' problem: the vehicle operates in a fairly controlled environment with a single negative goal (i.e. don't crash!).

Real world complexity

When you compare an autonomous vehicle to, say, a Robo Tradesperson, the complexity increases dramatically - in particular because real-world environments pose near-infinite variables to a Robo Tradesperson, compared to the controlled environment of the autonomous vehicle, where the variables are more easily predicted.

Just consider a simple scenario where a tradesperson visits your house to do a maintenance check on your intruder alarm. The engineer needs to:

a) locate the main system panel in the basement (down steep stairs)

b) open the alarm panel box, in cramped conditions (see the photo example below of a real life scenario - in my cellar!)

c) test the system battery amongst a sea of wires

d) then check the sensors in the property, which requires mounting a ladder, reaching up to unscrew the sensor cover, changing the device battery, and then replacing the sensor cover, etc.

The future of Trades

So, where does this leave the future of Tradespeople?

At this point, I should introduce my Dad - and his work-based obsessions over engineer efficiency.  For the last 7 years I've been helping run my family's security company AMCO Security - installing / servicing intruder alarm systems.

My father has two main pet peeves which he believes are the secret to unlocking untold profitability: 1) efficiency of engineers on the job (i.e. their ability to know what they're doing), and 2) wasted time driving between jobs (i.e. dead time).

The basic economics of AMCO is that two thirds of revenue comes from engineer time.  The more time engineers spend doing work for clients, the more money is made.  Current weekly engineer productivity analysis indicates utility of between 45% and 70%.

Imagine a time where that utility was closer to 100%. This is where Musk's and Super Engineer's world views converge.

AI augmentation not Robo-replacement

Given the magnitude of the technical challenge ahead, a realistic prediction is that the Robo Engineer won't be a thing for at least 25 years - getting on for twice the time it took to produce self-driving cars.

The future vision of Trades is most likely one where simple / repetitive tasks are automated (e.g. driving), and complex real-world tasks are still carried out by human AI-augmented Tradespeople (i.e. Tradespeople assisted by AI via things like AI wearables - like Super Engineer & Super Spex).

A realistic near-future picture

To break-down the everyday activities of a Tradesperson, the near-future vision will look something like this:

  • Work planning / Briefing: jobs will be planned by an autonomous planner - estimating the most efficient routes and matching relevant workforce skillsets to specific job scenarios.

  • Transportation: this work plan will then be fed to a self-driving van, which will deliver the Tradesperson to their jobs - messaging the customer re: ETAs and unexpected delays, and detouring to Distributor partners to pick up parts / equipment (if necessary).

  • Site visit / manual work: when the Tradesperson reaches site, they'll pop on their Super Spex AI Glasses, and carry out the physical onsite work. The AI will guide the Tradesperson where necessary - acting as their personal assistant.

  • Reporting & communication: via AI Glasses technology (image & audio to text transcription), all work can be easily recorded and reported, and automatically saved / sent to HQ.

  • Trouble-shooting: any technical issue can be answered quickly via Super Engineer's AI - providing immediate specialist technical assistance.

Efficiency gains

Imagine if it was possible to increase engineer efficiency by say 15% (not a crazy increase) - due to AI augmentation speeding up time on site. This efficiency would drop straight through to utility increases (i.e. amount of engineer time spent on paid jobs) of say 10%.

At an average estimated cost of £20 an hour, and charge rate of £50 an hour, a 10% increase in utility of this resource is worth c.£7,000 extra revenue per year per engineer.

£100 billion impact

Consider then the sheer size of the service engineer market, which - taking into account trades like electricians, fire/security, HVAC and facility technicians - is around 600,000 in the UK and 3 million in the US alone (estimated at 50 million globally).

Realistic estimates of the market revenue impacts of AI (based on a 10% utility increase, 50% adoption & 60% realisation rate) indicate an additional £1.2 billion revenue in the UK, £5.9 billion in the US - and a global impact of £100 billion.
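As a sanity check, the sizing arithmetic above can be reproduced in a few lines of TypeScript. The only input of mine is the billable-hours assumption used to back out the article's £7,000 per-engineer figure; everything else is the article's own estimates.

```typescript
// Rough reconstruction of the article's sizing arithmetic.
const chargeRate = 50;       // £ per hour (article)
const billableHours = 1_400; // assumed: ~70% utility on a ~2,000h year
const utilityUplift = 0.10;  // 10% more billable time (article)

const extraRevenuePerEngineer = chargeRate * billableHours * utilityUplift;
// ≈ £7,000 per engineer per year, matching the article

const adoption = 0.5;    // 50% of engineers adopt (article)
const realisation = 0.6; // 60% of the theoretical gain realised (article)

function marketImpact(engineers: number): number {
  return engineers * extraRevenuePerEngineer * adoption * realisation;
}

console.log(marketImpact(600_000));    // UK: ≈ £1.26bn
console.log(marketImpact(50_000_000)); // global: ≈ £105bn (~£100bn)
```

On these inputs the US comes out nearer £6.3bn than the quoted £5.9bn, so the article's US figure presumably uses slightly different local rates.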

Compliance and safety

Beyond economic gains, the use of AI in Trades has additional broader impacts linked to compliance and safety. In the UK, all industries require works to be implemented according to British Standards. The practical difficulty with this is the relative complexity of these standards, and the ability of the engineer to remember them - as the standards documents may run to hundreds of pages.

Using AI like Super Engineer, British Standards can be directly injected into an engineer's workflows - in a simple / practical way, ensuring the Tradesperson works to the letter of the law.

The future is human

The future of the Tradesperson is clearly a very human-based future, which is hugely positive for the thriving community of Trades.

If you're an electrician, HVAC, fire/security technician, facility manager, then your jobs are safe for another 25+ years.

And excitingly, the timeline for this AI-enhanced vision of Trades is closer than you think:

  • AI Technical assistants are now available via Super Engineer

  • AI Glasses and wearables will be live in the next 12 months via Super Spex

  • Autonomous vans are likely to be available within a 2 to 10 year timeframe (depending on your location and changes in legislation).

The future for Trades is one of the few areas of employment that will truly thrive in an AI world. For Trades, the future will be a world of AI-enhancements rather than robot domination.

Google AI Glasses demo

Thinking

Feb 1, 2026

How to Develop for Google AI Glasses with Android XR

Google founder Sergey Brin announced the Google Glasses in 2012. For context, this was before Google Drive [2012], the Pixel Phone [2016] and these acquisitions: Motorola, Waze, Nest, Boston Dynamics, Firebase and DeepMind... It's been a long time!

That announcement, posted to Google+ (an unlucky Kondratiev wave surfer in the Digg cohort), was titled 'Project Glass: One Day'.

What's possible today

14 years later, Google have now released an SDK to power the next phase of Google Glass - which is AI Glasses, powered by Android XR.

Having been working on ideas around Google's AI Glasses for a while now, we want to share practical insights into what's possible with the tech today.

Understanding Android XR libraries

To get under the hood of these glasses, you first need to understand the Android operating system and which libraries Google has released for development:

Android developers are familiar with using Application Libraries, which Google rebranded as Jetpack at I/O 2018. An example of a primitive Jetpack library is CameraX - which lives at `androidx.camera.core`. It is important to make note of the `androidx` namespace, as that is where all the libraries live, including XR - the library for integrating with the glasses themselves.

Just like how you use the CameraX library to develop Camera based applications, the XR library is used to develop Glasses based applications.

How to go about Phone to Glasses?

Simple. The Google AI glasses can be reached via one canonical API. This API returns the `context` of the glasses - a reference to them, which you can call methods on.

`val glassesContext = ProjectedContext.createProjectedDeviceContext(phoneContext)`

Since we have the context, we can call any context-taking API on our new variable `glassesContext`. I've asked Cursor to generate a subset of the methods we can use, which includes sending data to sensors, triggering photos from the phone & more.

How to go about Glasses to Phone?

This starts with the Android Manifest file - a file Android reads before the program starts. You must link two attributes within it together:

1) `android:requiredDisplayCategory="xr_projected"`

which tells Android you are connecting to the glasses. You would do something similar to tell Android you are building a car app, using an attribute such as `android:name="androidx.car.app.category.MEDIA"`.

2) `android:name=".glasses.GlassesActivity"`

which tells Android to link the class GlassesActivity into a context for the glasses. The GlassesActivity becomes the context, and you can now run any of the APIs that take a context - such as accessing the camera & sensors.
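Putting the two attributes together, a minimal manifest entry might look like the sketch below. This is based only on the attributes described above; the exact schema required on real hardware may differ.

```xml
<!-- Sketch: an activity declared as the projected (glasses) surface.
     Attribute values are the article's examples; android:exported
     is my assumption for a launchable activity. -->
<activity
    android:name=".glasses.GlassesActivity"
    android:requiredDisplayCategory="xr_projected"
    android:exported="true" />
```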

Data transfer between devices is handled via `onReceive` and `sendBroadcast` on a context - which we now know how to create on both the phone & glasses.
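A minimal sketch of that broadcast plumbing in Kotlin - the action string and `"payload"` extra are my own illustrative choices, not a documented Super Spex API; the `sendBroadcast` / `registerReceiver` calls themselves are standard Android:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter

// Hypothetical action name for phone <-> glasses messages.
const val ACTION_SPEX_MESSAGE = "app.superengineer.spex.MESSAGE"

// Either side can send: both phone and glasses hold a Context.
fun send(context: Context, payload: String) {
    context.sendBroadcast(Intent(ACTION_SPEX_MESSAGE).putExtra("payload", payload))
}

// Either side can listen for messages from the other.
// (On API 33+, registerReceiver also needs an exported/not-exported flag.)
fun listen(context: Context, onMessage: (String) -> Unit) {
    val receiver = object : BroadcastReceiver() {
        override fun onReceive(ctx: Context, intent: Intent) {
            onMessage(intent.getStringExtra("payload") ?: return)
        }
    }
    context.registerReceiver(receiver, IntentFilter(ACTION_SPEX_MESSAGE))
}
```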

Super Spex - coming soon

Now back to building Super Spex!


AI Glasses for tradesmen dev team

Team News

Jan 26, 2026

Helping Tradesmen stop breaking their phones

Speaking to a builder recently, he told me about the insane number of phones he’d dropped / broken in recent years whilst working on site.

For most tradesmen the phone is a key tool; taking photos to share with colleagues / clients; making notes;  taking measurements; researching parts; finding fixes; taking calls from HQ; giving & taking instructions via video calls.

Form-factor

Tradesmen spend a lot of time on the phone, but in most cases the phone itself is the wrong form-factor.

Tradesmen need to use their hands for tools;  tradesmen have famously fat fingers;  and most Tradesmen are not the biggest fans of typing - as if they were, they’d have got a job in an office.

That’s where the opportunity for AI Glasses as a next-generation tool for Tradesmen comes in - AI Glasses provide a handsfree, voice-activated way to access instruction manuals, make notes, find parts, contact HQ, and live-stream calls from site.

Say hello to Super Spex

And this is what we’re building with Super Spex.

Super Spex are AI Glasses for Tradesmen, giving them all the AI tools needed to do their job better - handsfree.  

Android XR + Super Engineer

From a tech perspective, we’re using Android XR to build a framework of Tradesmen-focused workflows for Google Glasses - all powered by Super Engineer’s AI smarts.

Michael & Dima (photo above!) joined our team recently to lead our Android XR product build, and in parallel we’re working with hardware partners to finesse the design & form-factor.

Partners welcome

If you’re a potential hardware partner, please drop us a note for a chat: spex@superengineer.app

Super Glasses - AR glasses for technicians

Team News

Nov 30, 2025

Super Glasses (AR / AI) Lead - we're hiring!

We're looking for a lead engineer to work on our AR Glasses interface development - Super Glasses.

Currently the AR glasses tech stack (Android XR etc.) is still in its relative infancy, but it's due to move quickly in 2026, and we want Super Engineer to be a leader in the workforce AR / AI space with Super Glasses.


Super Glasses vision

Our vision for Super Glasses is to create the ultimate way for field service engineers to get AI assistance as they work - seamlessly experiencing Super Engineer's AI smarts whilst on the job.

Imagine you're an electrician.  You arrive at your first job of the day and you're faced with a piece of equipment - a complex intercom system - you've never worked on before.

No fear, you're wearing Super Glasses. One tap on the frame starts 'ID mode' and the Super Glasses immediately use their camera to identify that the intercom is a Fermax Kin - with Super Engineer's AI giving you the answer audibly via an ear piece.

You then double tap, and explain the fault scenario - talking out loud, with the Super Glasses microphone listening.  Seconds later Super Engineer is audibly giving you a step-by-step explanation of how to fix the fault via the ear piece - with an option for visual instructions to appear via Super Glasses' heads-up display (HUD).

Calmly you work through the fault-fix procedure, double-checking anything with Super Engineer (via a double tap) at any point.

You have no fear with Super Engineer & Super Glasses.

Role detail

Because of the tech-cycle stage we're at with AR Glasses technology, this role is a hybrid one - combining R&D, product design and engineering.  You'll also need to be a hustler: hustling your way into the early OS dev releases and working out what we should be building on the fly - iterating, experimenting & networking.

  • AR Glasses Tech Stack R&D.  You'll need to explore the various different software, firmware & hardware options in the market and evaluate / navigate the best possible product platform path for Super Glasses.

    • Android XR

    • Samsung

    • XReal

    • Even Realities

    • Magic Leap

    • Gentle Monster

    • Warby Parker

    • Kering Eyewear

  • AR product design (inc UX). You'll need to think about the overall product design & UX - especially the flow from AI software, through hardware, to the user. For example, consider the different ways a field service engineer could interact with the AI during operational work - whether that's speaking to it (input) or output via audio / heads-up displays (HUD) - and the format (length / complexity) of AI instructions for different media (audio / visual).

  • AI > AR integration systems design. Considering the balance between user need, user / tech adoption and tech sophistication (and reliability). For example, addressing the question of what level of instructions are feasible for a user to follow via HUD - taking into account user & technical considerations.

  • Software development. Choosing an OS, and developing the Super Glasses-specific OS. Working with various partners / teams, including software and hardware partners & UX designers (for the UX / hardware integration work).

  • Networking & communication. Making friends in the industry. Learning & sharing the Super Engineer love ;-)

How to apply

If you've got the hustle and technical chops to lead our Super Glasses project, drop us an email at team@superengineer.app. Tell us / share whatever you think is relevant to get the job.

Team News

Nov 3, 2025

How Super Engineer fixed a Halloween Candy Chute?!

Last week I started sharing a beta version of Super Engineer (the email interface version) with a few friends to get initial feedback.

Feedback was mostly silence, apart from a random email from my friend Alex at the weekend who'd (in jest) decided to test whether Super Engineer could help fix a malfunctioning Halloween Candy Chute.

Genuinely helpful Halloween help

Amusingly (and amazingly!) Super Engineer delivered the goods - giving genuinely helpful advice on how to fix a giant Halloween Clown sweet dispenser!


Although done in jest, this test was super insightful for us as it helped prove out various edge-case variables we'd built into our AI so Super Engineer works as an all-round AI assistant beyond its core deep technical abilities.


Broadening Super Engineer's knowledge

Unbeknown to my friend, our team had spent all of October re-engineering our AI to deal with a wide range of queries and tasks beyond the likes of fire alarm / HVAC instruction manuals - as we'd identified that we need to make Super Engineer a Trades Person's true assistant, rather than just a super laser-focused tech nerd.

AI beyond simple RAG

Technically, what that's meant is adding various layers of classifiers, re-routers and data synthesis that extend Super Engineer's AI beyond a simple RAG pipeline - giving Super Engineer true Super Powers, enabling it to deal with complex Fire Alarm queries as well as give Candy Chute fixes.

So, my friend's Halloween jest has given our team's month-long re-engineering work some timely validation. 

Give it a try

We're now encouraging a wider user-base to trial our AI - using our email. Simply email: help@superengineer.app

Load More

All Posts

Android XR AI Glasses emulator

Team News

Feb 18, 2026

SUPER SPEX v1: What We Learned Building AI Glasses for Tradespeople using Android XR

Super Spex are AI glasses for tradespeople: a hands-free interface to Super Engineer (our technical AI), delivered via audio plus a HUD.  The Super Spex concept is to provide all the relevant AI tools a tradesperson might need to do their job better in a handsfree way and hopefully reduce the number of phones accidentally smashed on site!

Our team built and open-sourced a V1 prototype of Super Spex to stress-test Android XR in real tradespeople-based workflows—and to share working patterns with other Android XR developers.  All code & documentation is available via the Super Spex GitHub Repo.

Android XR AI Glasses development team

^ Our Spex team working on honing 'Tea Mode' - the world's first AI-powered tea-maker ;-)

Objectives

To date, the majority of Android XR development has been focused on AR Glasses, so we were entering fairly virgin territory with our focus on AI Glasses - which are distinct in capability from AR-heavy experiences.

We discovered quickly that AI-glasses constraints are very tight / limited, as you’re optimizing for quick capture, low cognitive load, and reliable audio/visual guidance rather than persistent 3D world mapping of AR.

Our 4 core project objectives out of the gate were:

  • Validate hands-free workflows for technicians: the phone is useful but the wrong form factor on site—people drop, smash, and juggle it while working. AI glasses should remove friction while still giving “expert help in the moment.”

  • Learn the Android XR stack by building: pairing, camera/mic access, and projection to a HUD, then document what worked.

  • Create a reusable “modes” framework: composable workflows that map to real tasks (identify equipment, ask for help, capture notes, call for backup).

  • Open-source early: so the wider community can fork proven patterns.

What we’ve delivered so far

Super Spex V1 is an AI-powered glasses companion app built with React Native (Expo) + TypeScript, with a Kotlin native module bridging into Jetpack XR. The architecture is explicitly “phone hub, glasses display.”

AI answers are streamed via Super Engineer, described in the repo as a Gemini-based, multi-agent RAG wrapper grounded in thousands of technical manuals.

The V1 feature set we’ve built out is broken down as follows”

  • XR Glasses Projection: project UI to connected XR glasses (Jetpack XR).

  • AI-Powered Modes: speech, camera capture, tagging, and AI answer streaming.  Also, for fun we created ‘Tea Mode’ (more below).

  • Remote View Streaming: stream the glasses view to a web viewer using Agora RTC (two-way audio/video), with a small token server in cloudflare-workers/.

  • Web demo + cross-platform structure: platform-split files (.web.ts / .ts) so the same codebase can run in the browser.

  • Parking timer: a practical “don’t get ticketed” HUD timer plus phone notifications.

Experiencing the different ‘modes’

The main functionality of Super Spex are delivered via ‘modes’, which are essentially ‘workflows’ of specific Android XR (and third party services like Agora).  Each mode is designed to address a specific Tradesperson’s need:

  • Identify: snap a photo of a piece of equipment to quickly identify what it is, and how it works

  • Help: snap a photo of something and ask Super Engineer for help with it - instructions of how to use it or how to fix it

  • Notes: take video or photo notes of your job with speech live transcribed as the user talks (and videos / photos captured and saved)

  • Live Stream: the user streams live video from the glasses to multiple people via a simple shareable link, and gets live audio back via the glasses.  Remote fixes and instructions made easy.  

  • Parking timer: never get a parking fine again!  Set a timer, and get audio and HUD alerts.

  • Tea maker mode: take a snap of the colour of your colleague’s tea, and let our AI colour match & give you guidance when you next make them a cuppa!  Everyone hates a badly made cup of tea ;-)    

    AI tea-maker for tradesmen

     

For Android XR devs, arguably the most valuable code is the XR bridge and structure in modules/xr-glasses/android/… (connection management, Compose UI for glasses, projection, and streaming), plus the repo’s maintenance docs (XR projection, streaming, camera capture, speech recognition).

Technical challenges (Android XR in practice)

Our V1 prototyping validated the opportunity, and also highlighted the current realities within the XR dev space:

  • Documentation scarcity: getting projection and “device capability access” (camera/mic, HUD output) working took time because references were limited, so we embedded learnings directly in the codebase.

  • Emulator gaps (speech is the headline): the XR emulator lacks Google Services sign-in and on-device speech recognition, forcing backend transcription, which adds latency and a less smooth UX than real hardware.

  • “Connected” doesn’t mean “usable”: pairing via the platform glasses app / Bluetooth is easy; reliably using that pair is where most engineering time goes.

  • Performance/compute constraints: the XR emulator can be RAM-hungry (we saw ~22GB), and hardware acceleration/driver mismatch can cause slowdowns.  We’ve now got good experience of how teams should set up dev rigs.

  • Keeping XR isolated: we leaned on process separation for XR activities (:xr_process) to avoid destabilising React Native.  Developing both at this point is important because of the lack of access to Android XR AI Glasses, but developing in parallel adds overhead. 

What next?

V1 is the “line in the sand”: we’ve created and tested working modes, documented XR patterns, and delivered an open repo to the XR community.  Next steps are about getting from prototype to field-ready:

  • Test on real Android XR Glasses device to specifically understand HUD legibility, audio quality, gesture/tap ergonomics, end-to-end latency.

  • Harden the workflow framework around the long-term product direction (Identify / Help / notes / stream)

  • Improve speech + streaming reliability by tightening local capture paths and making backend fallbacks explicit and observable.

  • Keep shipping in the open: share more XR recipes and “copy/pasteable” patterns for other Android XR teams.

If you’re building on Android XR and want a real, end-to-end reference (XR projection → camera/speech capture → AI answer streaming → remote support), the Super Spex GitHub repo is a great practical starting point.

We look forward to progressing Super Spex to the next level.  We're always happy to chat: team@superengineer.app ;-)

Thinking

Feb 8, 2026

The evolution of the Tradesperson. Is the robot the future of Blue Collar work?

Elon Musk recently pivoted Tesla from manufacturing cars to autonomous vehicles & robots.  His vision of automated manual work as a thing of the not-too-distant future is propping up Tesla's valuation ($1.2 trillion at the time of writing) as car sales decline.

Robots are coming

Current demos of Musk's Optimus robots show them doing basic tasks in controlled environments such as folding shirts and sorting batteries into boxes.

Anecdotally, on Instagram the other day I watched a video of a robot loading wine glasses delicately into a dishwasher - suggesting the dexterity of robots has increased to a significant level.

That said, I've also watched a variety of robot fail videos too, including the now famous Russian robo-fail video.

My broad takeaway is that progress in robotics is impressive, but there's a long way to go.

Importance of World Models

Accelerated progress in autonomous robots is linked to advancements in AI, and in particular World Models.  World Models are distinct from the Large Language Models (LLMs) most of us are familiar with, which power apps like ChatGPT.

In simple terms, World Models model the 'physical world intelligence' i.e. the bit of human intelligence that allows us to interact with the real world and do stuff.  If you think of a human's intelligence, we have multiple levels of sophistication that enable us to do what we can do; we're not just a brain that can think and talk, but a physical being that can think, talk, move and interact.

The Large Language Model does the 'thinking' bit, and the World Model does the 'doing' bit - by helping the robot 'imagine before acting'.

But the doing bit is magnitudes more complex to model than the thinking bit.

Autonomous vehicles and low dimension problems

Consider self-driving cars.  It took c.15 years of R&D to produce an autonomous vehicle (work started in c.2005, and the first Waymo-powered cars took to the streets of San Francisco in c.2020), yet the world problem that self-driving cars master is relatively trivial.

Broken down to its core, the main goal of an autonomous vehicle is to 'not bump into anything else' - i.e. stop / swerve as quickly as possible once it's understood that there's an object in the way.

As impressive and futuristic as self-driving cars are, they're the low-hanging fruit of World AI, as they're only solving a very 'low-dimension' problem - low-dimension in that the autonomous vehicle operates in a fairly controlled environment with a single 'not' goal (i.e. don't crash!).

Real world complexity

When you compare an autonomous vehicle to, say, a Robo Tradesperson, the level of complexity increases dramatically - in particular because real-world environments pose infinite variables to a Robo Tradesperson, compared to the controlled environment of the autonomous vehicle, where the variables are more easily predicted.

Just consider a simple scenario where a tradesperson visits your house to do a maintenance check on your intruder alarm. The engineer needs to:

a) locate the main system panel in the basement (down steep stairs)

b) open the alarm panel box, in cramped conditions (see the photo example below of a real life scenario - in my cellar!)

c) test the system battery amongst a sea of wires

d) then check the sensors in the property, which requires mounting a ladder, reaching up to unscrew the sensor cover, changing the device battery, and then replacing the sensor cover, etc.

The future of Trades

So, where does this leave the future of Tradespeople?

At this point, I should introduce my Dad - and his obsession with engineer efficiency.  For the last 7 years I've been helping run my family's security company, AMCO Security - installing / servicing intruder alarm systems.

My father has two main pet peeves, which he believes are the secret to unlocking untold profitability: 1) efficiency of engineers on the job (i.e. their ability to know what they're doing); 2) wasted time driving between jobs (i.e. dead time).

The basic economics of AMCO are that two thirds of revenue comes from engineer time.  The more time engineers spend doing work for clients, the more money is made.  Current weekly productivity analysis puts engineer utility at between 45% and 70%.

Imagine a time where that utility was closer to 100%. This is where Musk's and Super Engineer's world views converge.

AI augmentation not Robo-replacement

Given the magnitude of the technical challenge ahead, a realistic prediction is that the Robo Engineer won't be a thing for at least 25 years - getting on for twice the c.15 years it took to produce self-driving cars.

The future vision of Trades is most likely one where simple / repetitive tasks are automated (e.g. driving), and complex real-world tasks are still carried out by human AI-augmented Tradespeople (i.e. Tradespeople assisted by AI via things like AI wearables - like Super Engineer & Super Spex).

A realistic near-future picture

To break-down the everyday activities of a Tradesperson, the near-future vision will look something like this:

  • Work planning / Briefing: jobs will be planned by an autonomous planner - estimating the most efficient routes and matching relevant workforce skillsets to specific job scenarios.

  • Transportation: this work plan will then be fed to a self-driving van, which will deliver the Tradesperson to their jobs - messaging the customer re: ETAs and unexpected delays, and detouring to Distributor partners to pick up parts / equipment (if necessary).

  • Site visit / manual work: when the Tradesperson reaches site, they'll pop on their Super Spex AI Glasses, and carry out the physical onsite work. The AI will guide the Tradesperson where necessary - acting as their personal assistant.

  • Reporting & communication: via AI Glasses technology (image & audio to text transcription), all work can be easily recorded and reported, and automatically saved / sent to HQ.

  • Trouble-shooting: any technical issue can be answered quickly via Super Engineer's AI - providing immediate specialist technical assistance.

Efficiency gains

Imagine if it were possible to increase engineer efficiency by, say, 15% (not a crazy increase) due to AI augmentation speeding up time on site. This efficiency would drop straight through to a utility increase (i.e. the amount of engineer time spent on paid jobs) of, say, 10%.

At an average estimated cost of £20 an hour, and charge rate of £50 an hour, a 10% increase in utility of this resource is worth c.£7,000 extra revenue per year per engineer.
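As a rough sanity check on that figure - the £50/h charge rate and the 10% utility increase come from above, but the billable-hours figure is our assumption:

```kotlin
// Back-of-envelope check of the c.£7,000 figure. Only the £50/h charge
// rate and the 10% utility increase come from the article; the ~1,400
// billable hours per engineer per year is an assumption.
fun extraAnnualRevenue(
    billableHours: Double = 1400.0, // assumed
    utilityIncrease: Double = 0.10, // from the article
    chargeRate: Double = 50.0       // £/hour, from the article
) = billableHours * utilityIncrease * chargeRate

fun main() {
    println(extraAnnualRevenue()) // 7000.0 - matching the c.£7,000 estimate
}
```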

£100 billion impact

Consider then the sheer size of the service engineer market, which - taking into account trades like electricians, fire / security, HVAC and facility technicians - numbers 600,000 in the UK and 3 million in the US alone (an estimated 50 million globally).

Realistic estimates of the market revenue impacts of AI (based on a 10% utility increase, 50% adoption & 60% realisation rate) indicate an additional £1.2 billion revenue in the UK, £5.9 billion in the US - and a global impact of £100 billion.

Compliance and safety

Beyond economic gains, the use of AI in Trades has additional broader impacts linked to compliance and safety. In the UK, all industries require works to be implemented according to British Standards. The practical difficulty with this is the relative complexity of these standards, and the ability of the engineer to remember them - as the standards documents may run to hundreds of pages.

Using AI like Super Engineer, British Standards can be directly injected into an engineer's workflows - in a simple / practical way, ensuring the Tradesperson works to the letter of the law.

The future is human

The future of the Tradesperson is clearly a very human-based future, which is hugely positive for the thriving community of Trades.

If you're an electrician, HVAC, fire/security technician, facility manager, then your jobs are safe for another 25+ years.

And excitingly, the timeline for this AI-enhanced vision of Trades is closer than you think:

  • AI Technical assistants are now available via Super Engineer

  • AI Glasses and wearables will be live in the next 12 months via Super Spex

  • Autonomous vans are likely to be available within a 2 to 10 year timeframe (depending on your location and changes in legislation).

Trades is one of the few areas of employment that will truly thrive in an AI world. For Trades, the future will be one of AI-enhancement rather than robot domination.

Google AI Glasses demo

Thinking

Feb 1, 2026

How to Develop for Google AI Glasses with Android XR

Google co-founder Sergey Brin announced Google Glass in 2012. For context, this was before Google Drive [2012], the Pixel Phone [2016] and these acquisitions: Motorola, Waze, Nest, Boston Dynamics, Firebase and DeepMind... It's been a long time!

That announcement, posted to Google+ (an unlucky Kondratiev wave surfer in the Digg cohort), was titled 'Project Glass: One Day'.

What's possible today

14 years later, Google have now released an SDK to power the next phase of Google Glass - which is AI Glasses, powered by Android XR.

Having been working on ideas around Google's AI Glasses for a while now, we want to share practical insights into what's possible with the tech today.

Understanding Android XR libraries

To get under the hood of these glasses, you first need to understand the Android operating system and which libraries Google has released for development:

Android developers are familiar with using Application Libraries, which Google rebranded as Jetpack at I/O 2018. An example of a primitive Jetpack library is CameraX - which lives under `androidx.camera.core`. It is important to make note of the `androidx` namespace, as that is where all the libraries live, including XR - the library for integrating with the glasses themselves.

Just like how you use the CameraX library to develop Camera based applications, the XR library is used to develop Glasses based applications.

How to go from Phone to Glasses?

Simple. The Google AI glasses can be reached via one canonical API. This API returns the `context` of the glasses - a reference to them, on which you can call methods.

`val glassesContext = ProjectedContext.createProjectedDeviceContext(phoneContext)`

Since we have the context, we can call any context-taking API on our new variable `glassesContext`. I've asked Cursor to generate a subset of the methods we can use, which includes sending data to sensors, triggering photos from the phone & more.
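As a sketch of what that unlocks - `createProjectedDeviceContext` is from the snippet above; the rest is the standard Android `Context` API, and the broadcast action name is a made-up placeholder:

```kotlin
// Sketch: the glasses Context behaves like any Android Context, so
// standard context-taking APIs can be pointed at the glasses.
val glassesContext = ProjectedContext.createProjectedDeviceContext(phoneContext)

// e.g. system services resolved against the glasses:
val cameraManager = glassesContext.getSystemService(CameraManager::class.java)

// e.g. broadcasting to code running on the glasses (placeholder action):
glassesContext.sendBroadcast(Intent("com.example.spex.ACTION_SHOW_HUD"))
```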

How to go from Glasses to Phone?

This starts with the Android Manifest file - a file the system reads before your app starts. You must link two attributes within it together:

1) `android:requiredDisplayCategory="xr_projected"`

which tells Android you are connecting to the glasses. You would do something similar to tell Android you are building a car app, using an attribute such as `android:name="androidx.car.app.category.MEDIA"`

2) `android:name=".glasses.GlassesActivity"`

which tells Android to link the class named GlassesActivity into a context for the glasses. The GlassesActivity becomes the context, and you can now run any of the APIs that take in a context - lots were shown in the image above, such as accessing the camera & sensors!
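Putting those two attributes together, a minimal manifest entry might look like this - only the two attributes come from the text; the rest is illustrative boilerplate:

```xml
<!-- Sketch: a glasses Activity combining the two attributes described
     above. Class name and exported flag are illustrative. -->
<activity
    android:name=".glasses.GlassesActivity"
    android:requiredDisplayCategory="xr_projected"
    android:exported="true" />
```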

Data transfer between devices is handled via `onReceive` and `sendBroadcast` on a context - which we now know how to create on both the phone & glasses.
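A minimal sketch of that round-trip, assuming a made-up action name and payload:

```kotlin
// Phone side: broadcast a payload (action name and extras are placeholders).
phoneContext.sendBroadcast(
    Intent("com.example.spex.NOTE_CAPTURED").putExtra("text", "Fermax Kin, fault E3")
)

// Glasses side: listen on the glasses Context.
val receiver = object : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        val note = intent.getStringExtra("text")
        // ...render the note on the HUD...
    }
}
glassesContext.registerReceiver(
    receiver,
    IntentFilter("com.example.spex.NOTE_CAPTURED"),
    Context.RECEIVER_NOT_EXPORTED // flag required on API 33+
)
```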

Super Spex - coming soon

Now back to building Super Spex!


AI Glasses for tradesmen dev team

Team News

Jan 26, 2026

Helping Tradesmen stop breaking their phones

Speaking to a builder recently, he told me about the insane number of phones he’d dropped / broken in recent years whilst working on site.

For most tradesmen the phone is a key tool: taking photos to share with colleagues / clients; making notes; taking measurements; researching parts; finding fixes; taking calls from HQ; giving & taking instructions via video calls.

Form-factor

Tradesmen spend a lot of time on the phone, but in most cases the phone itself is the wrong form-factor.

Tradesmen need to use their hands for tools;  tradesmen have famously fat fingers;  and most Tradesmen are not the biggest fans of typing - as if they were, they’d have got a job in an office.

That’s where the opportunity for AI Glasses as a next-generation tool for Tradesmen comes in - AI Glasses provide a handsfree, voice-activated way to access instruction manuals, make notes, find parts, contact HQ, and live-stream calls from site.

Say hello to Super Spex

And this is what we’re building with Super Spex.

Super Spex are AI Glasses for Tradesmen, giving them all the AI tools needed to do their job better - handsfree.  

Android XR + Super Engineer

From a tech perspective, we’re using Android XR to build a framework of Tradesmen-focused workflows for Google Glasses - all powered by Super Engineer’s AI smarts.

Michael & Dima (photo above!) joined our team recently to lead our Android XR product build, and in parallel we’re working with hardware partners to finesse the design & form-factor.

Partners welcome

If you’re a potential hardware partner, please drop us a note for a chat: spex@superengineer.app

Super Glasses - AR glasses for technicians

Team News

Nov 30, 2025

Super Glasses (AR / AI) Lead - we're hiring!

We're looking for a lead engineer to work on our AR Glasses interface development - Super Glasses.

Currently the AR glasses tech stack (Android XR etc.) is still in its relative infancy; however, it's due to move quickly in 2026 and we want Super Engineer to be a leader in the workforce AR / AI space with Super Glasses.


Super Glasses vision

Our vision for Super Glasses is to create the ultimate way for field service engineers to get AI assistance as they work - seamlessly experiencing Super Engineer's AI smarts whilst on the job.

Imagine you're an electrician.  You arrive at your first job of the day and you're faced with a piece of equipment - a complex intercom system - that you've never worked on before.

No fear, you're wearing Super Glasses. One tap on the frame starts 'ID mode' and the Super Glasses immediately use their camera to identify that the intercom is a Fermax Kin - with Super Engineer's AI giving you the answer audibly via an ear piece.

You then double tap, and explain the fault scenario - talking out loud, with the Super Glasses microphone listening.  Seconds later Super Engineer is audibly giving you a step-by-step explanation of how to fix the fault via the ear piece - with an option for visual instructions to appear via Super Glasses' heads-up display (HUD).

Calmly you work through the fault-fix procedure, double-checking anything with Super Engineer (via a double tap) at any point.

You have no fear with Super Engineer & Super Glasses.

Role detail

Because of the tech-cycle stage we're at with AR Glasses technology, this is a hybrid role - combining R&D, product design and engineering.  You'll also need to be a hustler - hustling your way into the early OS dev releases and working out what we should be building on the fly: iterating, experimenting & networking.

  • AR Glasses Tech Stack R&D.  You'll need to explore the various different software, firmware & hardware options in the market and evaluate / navigate the best possible product platform path for Super Glasses.

    • Android XR

    • Samsung

    • XReal

    • Even Realities

    • Magic Leap

    • Gentle Monster

    • Warby Parker

    • Kering Eyewear

  • AR product design (inc UX). You'll need to think about the overall product design & UX - especially the AI-software-into-hardware-into-user perspective. For example, consider different ways a field service engineer could interact with the AI during operational work - whether that's speaking to it (input) or output via audio / heads-up displays (HUD). Also consider the format (length / complexity of instructions) of AI output for different media (audio / visual).

  • AI > AR integration systems design. Considering the balance between user need, user / tech adoption and tech sophistication (and reliability). For example, addressing the question of what level of instructions are feasible for a user to follow via HUD - taking into account user & technical considerations.

  • Software development. Choosing an OS, and developing the Super Glasses-specific OS. Working with various partners / teams, including software and hardware partners & UX designers (for the UX / hardware integration work).

  • Networking & communication. Making friends in the industry. Learning & sharing the Super Engineer love ;-)

How to apply

If you've got the hustle and technical chops to lead our Super Glasses project, drop us an email at team@superengineer.app. Tell us / share whatever you think is relevant to get the job.

Team News

Nov 3, 2025

How Super Engineer fixed a Halloween Candy Chute?!

Last week I started sharing a beta version of Super Engineer (the email interface version) with a few friends to get initial feedback.

Feedback was mostly silence, apart from a random email from my friend Alex at the weekend who'd (in jest) decided to test whether Super Engineer could help fix a malfunctioning Halloween Candy Chute.

Genuinely helpful Halloween help

Amusingly (and amazingly!) Super Engineer delivered the goods - giving genuinely helpful advice on how to fix a giant Halloween Clown sweet dispenser!


Although done in jest, this test was super insightful for us as it helped prove out various edge-case variables we'd built into our AI so Super Engineer works as an all-round AI assistant beyond its core deep technical abilities.


Broadening Super Engineer's knowledge

Unbeknown to my friend, our team had spent all of October re-engineering our AI to deal with a wide range of queries and tasks beyond the likes of fire alarm / HVAC instruction manuals - as we'd identified that we need to make Super Engineer a Tradesperson's true assistant, rather than just a super laser-focused tech nerd.

AI beyond simple RAG

Technically, what that's meant is adding various layers of classifiers, re-routers and data synthesis that extend Super Engineer's AI beyond a simple RAG pipeline - giving Super Engineer true Super Powers, enabling it to deal with complex Fire Alarm queries as well as give Candy Chute fixes.

So, my friend's Halloween jest has given our team's month-long re-engineering work some timely validation. 

Give it a try

We're now encouraging a wider user-base to trial our AI via email. Simply email: help@superengineer.app
