
AI-assisted low-cost ultrasound scanner

ACX Grants 2025 · Biomedical · Global health & development

Venkata Subhash Chandra Sadhu

Active Grant
$23,000 raised
$59,000 funding goal


Description of proposed project

Medical imaging tools are currently gated behind hospitals, permissions, and significant costs. This imposes a severe bottleneck on how often people get scanned: the high-performance tools are big and bulky, while the portable tools are low-performance.

This deficiency manifests in a whole range of problems people face outside the hospital. The current standard of care is to deal with each of them without any real feedback from the body. People don’t typically think of these as “problems for medical imaging”, because we are not used to having imaging capabilities outside the hospital.

We wish to make medical imaging as ubiquitous and frictionless as smartphone cameras have become. Of the three main medical imaging modalities, X-rays are carcinogenic under repeated exposure, and MRI scanners require large magnetic fields and are hard to miniaturize, leaving ultrasound as the ideal candidate. We are thus motivated to build an ultrasound imaging device with a friction-free user experience that is also economical to manufacture at scale.

We are developing ultrasound imaging technology in a wearable patch form factor that leverages commercially available off-the-shelf parts mass-manufactured for other applications. This is a fundamental improvement in sensing architecture that delivers superior performance at low device cost, not just a cheaper mass-manufactured version of existing ultrasound designs.

Further, while conventional ultrasound imaging needs a skilled operator, our integrated (hardware + software + AI) system would make it trivial to aim the ultrasound device correctly and extract the relevant biomarker from the image without further human intervention. This would enable anyone to use the device without training.

Such a device can serve as a general purpose platform that can:

  1. Massively increase the number of time points at which we scan (regular monitoring with smaller gaps, and over longer windows)

  2. Massively increase access to scanning, given the convenience of scanning from home, and for a fraction of the cost.

Beyond the immediate consequences for individual health [1], such a wearable device can revolutionize the practice of healthcare research. A couple of examples:

  • Better clinical trials: Currently, compliance with study protocols & regulatory standards is poor, and it is hard to recruit people and to measure the state of their health with high fidelity/resolution. An at-home continuous monitoring device would solve many of these problems, and possibly open a path (longer term) to cheaper, larger-scale clinical trials. Similar benefits in principle to something like the Oura ring, but with high-fidelity information in a completely new sensory modality.

  • Rich physiological dataset: Nobody has scanned the body’s internals at scale for millions of people from all over the world and assembled a tall + wide (many people + many biomarkers) dataset. Widespread use of such a scanning platform will generate data that can advance both our practice of medicine and our understanding of human physiology.

While we see ground-breaking potential ahead, a journey of a thousand miles must begin with the first few steps. Here is what we’ve done so far, concretely:

  1. Tested the core physics assumptions experimentally.

  2. Built a simulated digital twin of the core sensing hardware and the critical software elements, based on reasonable, explicit physics assumptions.

  3. Modeled SWAP-C (size, weight, power & cost) for a first version of a hardware device.

  4. Discussed with doctors and potential users to develop a staged roll-out strategy for the sequence in which applications could be enabled by our platform technology.

Our next step is to build a prototype that produces an ultrasound image, and demonstrates our claims of radically improving the size/cost/power vs performance tradeoff.

[1] To give one example of the application to individual health, consider reproductive health. Period tracking can now become full menstrual cycle tracking. Users are currently flying blind with a few bits of input per month (often just “how do I feel?”); high-resolution continuous monitoring with a wearable device would substantially increase the fidelity. Apart from the obvious general-purpose benefits of cycle tracking and its applications for fertility planning (including IVF), this could also help with endemic conditions like PCOS and endometriosis. (We could imagine similar improvements aimed at other organs in the body. Happy to discuss more offline; we’re actively researching the full breadth of possible applications.)

Why are you qualified to work on this?

There are two of us working on this project: Siva Swaminathan and Subhash Sadhu (me). Here is a brief background on each of us.

  • I (Subhash) developed semiconductor ICs for 3D Time-of-Flight imaging systems at Texas Instruments, including machine vision and camera/LiDAR development for very successful warehouse robots and consumer drones. I hold 10 granted USPTO patents related to this work, which were key to TI’s strategic dominance in 3D ToF imaging for many years. I designed subsystems within ICs, and also designed the electronics and optics of full systems that went into manufactured products. I did a Master’s in Computational Imaging at the MIT Media Lab, where I designed the hardware and image reconstruction algorithms for LIDARs that can see around corners. I then worked at an early-stage startup developing long-range silicon-photonics-based LIDAR, where I solved many system-level problems and built robust investor demos of the integrated system with hardware that was anything but robust.

  • Siva has long pursued the intersection of math/algorithms and the physical world. He has multiple peer-reviewed publications in physics and AI, and several patents for developing key technology in each of his roles. He obtained a PhD in theoretical physics studying quantum gravity and particle physics, and then got interested in similarly flavored problems in signal processing and AI/robotics. He worked on computer vision and 3D computational geometry applied to robotics at Vicarious AI. Most recently, he was an AI researcher at Google DeepMind, building intelligent agents that leverage world models to solve problems efficiently through planning and reasoning.

Developing this device needs tight collaboration between people who have deep expertise in photonics, electronic hardware design and manufacturing, imaging and reconstruction algorithms, and AI. Together, we have complementary skills that span the technical background (combination of physics, hardware engineering, math & AI expertise) needed to build this technology.

How much money do you need?

The total amount we request is $59,000. We see the project in three phases.

  1. The first goal is to demonstrate an ultrasound image with benchtop instruments, and reconcile the predictions of the digital twin against experimental results. The budget is $23,000 over a timeline of 4 months, spent on components, lab equipment, and some custom component fabrication services, to set up a lab and build a prototype.

  2. The second goal is to rebuild the imaging system with off-the-shelf components and reconcile the performance metrics with the previous measurements, over a timeline of 4 months. The budget for this would be $10,000 for components and lab equipment, and $8,000 for custom PCB design, assembly, and testing.

  3. The third goal is to reconcile the system performance against existing ultrasound imaging systems. This would take around two months, and need a budget of $7,000 to buy equipment and $5,000 to make custom mechanical components and test rigs.

Additionally, $6,000 would cover the rent and utilities for a small lab space (in Boston, which is where we’re building this) for a year.
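The phase budgets above can be tallied to confirm they sum to the $59,000 request (the labels below are shorthand for the phases described in the text, not official line items):

```python
# Tally the phase budgets listed in the proposal.
phases = {
    "Phase 1: benchtop prototype (4 months)": 23_000,
    "Phase 2: off-the-shelf rebuild (4 months)": 10_000 + 8_000,  # components + PCB work
    "Phase 3: benchmarking vs existing systems (2 months)": 7_000 + 5_000,  # equipment + test rigs
    "Lab rent and utilities (1 year)": 6_000,
}
total = sum(phases.values())
print(total)  # 59000, matching the funding goal
```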

Donations

ACX Grants donated $23,000 on 2025-11-06.