

I saw a Copilot prompt in MS PowerPoint today - top left corner of EVERY SINGLE SLIDE - and I had a quiet fit in my cubicle. Welcome to hell.
Grapefruit inhibits specific metabolic pathways in the liver (mainly the enzyme CYP3A4). A lot of medications are broken down by those same pathways. That’s just how the body works, unfortunately.
I’ve seen publishers advertise their other titles within the box, which, honestly, isn’t an issue for me. These, however, are crossing a line.
See Alk’s comment above; I touched on medical applications there.
As for commercial uses, I see very few. These devices are so invasive, I doubt they could be approved for commercial use.
I think the future of brain-computer interfacing lies in functional near-infrared spectroscopy (fNIRS). Basically, it uses the same infrared technology as a pulse oximeter to measure changes in blood flow in your brain. Since it uses light (instead of electricity or magnetism) to measure the brain, it’s resistant to basically all the noise endemic to EEG and MRI. It’s also 100% portable. But the spatial resolution is pretty low.
HOWEVER, the signals have really high temporal resolution. With a strong enough machine learning algorithm, I wonder if someone could interpret the signal well enough for commercial applications. I saw this first-hand in my PhD - one of our lab techs wrote an algorithm that could read as little as 500 ms of data and reasonably predict whether the participant was reading a grammatically simple or complex sentence.
It didn’t get published, sadly, due to lab politics. And, honestly, I don’t have 100% faith in the code he used. But I can’t help but wonder.
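For anyone curious what a bare-bones version of that kind of decoder looks like, here’s a toy sketch in Python - simulated data and scikit-learn, nothing like the lab’s actual code, just the general idea of classifying short fNIRS windows:

```python
# Toy sketch: classify 500 ms fNIRS windows as "simple" vs "complex" sentence
# reading. Everything here is simulated; a real pipeline needs filtering,
# motion correction, and proper train/test splits across participants.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels, n_samples = 200, 16, 5   # ~500 ms per window at 10 Hz
labels = rng.integers(0, 2, n_trials)          # 0 = simple, 1 = complex

# Simulated hemodynamic windows; "complex" trials get a slightly larger response
windows = rng.normal(size=(n_trials, n_channels, n_samples))
windows[labels == 1] += 0.3

# Cheap feature: mean signal per channel within each window
features = windows.mean(axis=2)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

The model itself can be this simple - the hard part is whether the features in a genuine recording separate at all.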
A traditional electrode array needs to be as close to the neurons as possible to collect data. So, straight through the dura and pia mater, into the parenchyma where the cell axons and bodies are hanging out. Usually, they collect local data without getting any long-distance information, which is a limiting factor for this technology.
The brain needs widespread areas working in tandem to get most complicated tasks done. An electrode is great for measuring motor activity because motor regions are pretty localized. But something like memory or language? Not really possible.
There are electrocorticographic (ECoG) devices that place electrodes over a wide area and can rest on the pia mater, on the surface of the brain. Less invasive, but you still need a craniotomy to place the device. They also have lower resolution.
The most practical medical purpose I’ve seen is as a prosthetic implant for people with brain/spinal cord damage. Battelle in Ohio developed a very successful implant and has since received DARPA funding: https://www.battelle.org/insights/newsroom/press-release-details/battelle-led-team-wins-darpa-award-to-develop-injectable-bi-directional-brain-computer-interface. I think that article over-sells the product a little bit.
The biggest obstacle to invasive brain-computer implants like this one is their longevity. Inevitably, any metal electrode implanted in the brain gets rejected by the brain’s immune system. It’s a well-studied process: a glial scar forms, neurons move away from the implant, and the overall signal of the device decreases. We need advances in biocompatibility before this really becomes revolutionary.
ETA: This device avoids putting metal in the brain; instead, it sends axons into the brain. Certainly a novel approach, which runs into different issues: the new neurons need to be accepted by the brain, and they need to be kept alive by the device.
If they moved the cell bodies into the brain and had the device house the axons and dendrites (a neuron’s inputs and outputs), they could maybe let the brain keep the device alive. But that would be a much more difficult installation procedure.
Fantastic question. Like Will_a said, I’ve never seen a device designed for input to the brain like this.
In this particular example, if someone were to compromise the device, even though it’s not able to “fry” their brain with direct electricity, they could overload the input neurons with a ton of stimulus. This would likely break the device because the input neurons would die, and it could possibly cause the user to have a seizure, depending on how connected the input was to the user’s brain.
That does bring to mind devices like the one developed by Battelle, where the device reads brain activity and then outputs to a sleeve or cuff designed to stimulate muscles. The goal of the device is to act as a prosthesis for people with spinal cord injuries. I imagine that device wasn’t connected to the internet in any way, but in the worst-case scenario where a hacker compromises the device, they could cause someone’s muscles to seize up.
Agree, fascinating question. To be precise, they used genetically modified neurons (aka optogenetics) to test if the device can deliver a signal into the brain. Optogenetics incorporates neurons modified with light-sensitive channel proteins, so the neuron activates when a precise wavelength of light is “seen” by the special protein. One of the coolest methods in neuroscience, in my opinion.
“To see if the idea works in practice they installed the device in mice, using neurons genetically modified to react to light. Three weeks after implantation, they carried out a series of experiments where they trained the mice to respond whenever a light was shone on the device. The mice were able to detect when this happened, suggesting the light-sensitive neurons had merged with their native brain cells.”
Oh neat, another brain implant startup. I published in this field. If anyone has questions, I’m happy to answer.
Subdermal is a lot easier than implanting in other compartments, e.g., intracranial. For example, hormonal birth control exists as an implant.
But there’s fascinating research into how the brain rejects implanted electrodes, e.g., Neuralink’s. Lots of work has been done developing materials that are less likely to be rejected by the brain and its immune system. For example, electrodes can be coated in chemicals to make them less harsh to the body, and flexible materials can be used.
I love my Etymotics. Bit pricey but so comfortable. I wore them for years as a musician and now for live shows.
I’ve been chasing the high from Elite Dangerous on my HOSAS setup for years. Thankfully, the recent updates have breathed some new life into the game.
X4. Come for the arcadey spaceflight simulator, stay for the galactic-scale empire building, leave for another save file once the Xenon start sending multiple I-class battleships against your Teladi allies, who can’t muster the strength to repel them, and the entire gate network falls because you were too busy solving the Paranid Civil War.
Unfortunately there’s a lot of truth in that statement, especially in the case of rare disease. It’s really difficult to convince a company to spend billions to develop a treatment that will only cure 1 in 100,000 people without letting them charge an arm and a leg, and giving them a very long exclusivity deal so they can continue to charge high prices. So much of that cost to develop is due to the dozens of other failed drugs and formulations they tried on their way to success.
I don’t have a solution for the problem, and I’m always a little suspicious of anyone who claims it’s easy to solve. I think the UK has a decent idea: the NHS (through NICE) basically decides whether it will cover a drug by comparing the expected benefit against the asking price. If that ratio is too skewed, they refuse to cover the medication. In theory, this is an incentive for a company to charge less. In practice, it leads to some companies choosing not to market in the UK.
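If it helps to see the logic concretely, here’s a toy sketch of that cost-per-benefit comparison. All the drug numbers are made up; the threshold roughly reflects the 20,000 to 30,000 GBP per QALY range commonly cited for NICE appraisals.

```python
# Toy sketch of the cost-effectiveness logic. Drug costs and QALYs are made up;
# the threshold roughly reflects the 20,000-30,000 GBP per QALY range commonly
# cited for NICE appraisals.

def incremental_cost_effectiveness(cost_new, cost_old, qaly_new, qaly_old):
    """Extra cost per extra quality-adjusted life year (QALY) gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

THRESHOLD_GBP_PER_QALY = 30_000  # upper end of the usual range

# Hypothetical new drug vs. the current standard of care
icer = incremental_cost_effectiveness(
    cost_new=120_000, cost_old=30_000,  # lifetime treatment cost, GBP
    qaly_new=6.5, qaly_old=4.0,         # expected QALYs per patient
)

print(f"ICER: {icer:,.0f} GBP per QALY")
if icer <= THRESHOLD_GBP_PER_QALY:
    print("Likely to be recommended at this price")
else:
    print("Likely to be rejected unless the company drops the price")
```

In that made-up example the drug comes out at 36,000 GBP per QALY, so the company either cuts the price or walks away from the UK market - which is exactly the dynamic I mean.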
Here’s a bit of hope for you, scientists have figured out how to trick the body into producing any protein or antibody they want, through technology like gene therapy and mRNA vaccines. We’re about to cure a lot of diseases that used to be 100% fatal. Diseases that kill kids and adults alike.
Most things seem to be getting worse these days, but at least we’re making progress in other areas.
For me and mine, it’s carrots. Do you know how difficult it is to find carrot-free items? Impossible.
Agreed, seems like a no-brainer. Typically this stuff is handled at an institutional level, with bad professors losing or failing to achieve tenure. But some results have much bigger implications than just “Uh oh, I cited that paper and it was a bad one.” Often, entire clinical pipelines are developed off of bad research, which wastes millions of dollars.
See also, the recent scandals in Alzheimer’s research. https://www.science.org/content/article/potential-fabrication-research-images-threatens-key-theory-alzheimers-disease
In grad school I worked with MRI data (hence the username). I had to upload ~500GB to our supercomputing cluster, somewhere around 100,000 MRI images, and I wrote 20 or so different machine learning algorithms to process them. All said and done, I ended up with about 2.5TB on the supercomputer. About 500MB of that ended up being useful and made it into my thesis.
Don’t stay in school, kids.
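For flavor, most of those batch jobs looked like some variation on this (not my actual pipeline, just a sketch that assumes a hypothetical scans/ folder of NIfTI files and the nibabel package):

```python
# Sketch of a minimal MRI batch job: walk a folder of NIfTI scans, pull one
# cheap summary feature per scan, and dump the results to a CSV.
import csv
from pathlib import Path

import nibabel as nib

rows = []
for path in sorted(Path("scans").glob("*.nii.gz")):
    data = nib.load(str(path)).get_fdata()
    # Toy feature: mean intensity inside a crude intensity-based "brain" mask
    mask = data > data.mean()
    rows.append({"scan": path.name, "mean_intensity": float(data[mask].mean())})

with open("features.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["scan", "mean_intensity"])
    writer.writeheader()
    writer.writerows(rows)

print(f"processed {len(rows)} scans")
```

Multiply that by 100,000 images and 20 different models and you get the 2.5TB.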
Nah. Fenced epee for a bit in a college club. Height advantage was pretty great. I guess it just depends on the weapon.
My favorite AI fact is from cancer research. The New Yorker has a great article about how an algorithm used to identify and price out pastries at a Japanese bakery found surprising success as a cancer detector. https://www.newyorker.com/tech/annals-of-technology/the-pastry-ai-that-learned-to-fight-cancer