ADAM PARTNERS

Archive for 2011 | Yearly archive page

PLEASE SIT DOWN. MACHINE WILL SEE YOU SHORTLY, NUMBER 445343-01.

In ARTIFICIAL INTELLIGENCE, WARNING on July 31, 2011 at 10:29 pm

FIRST RULE OF ROBOTICS, DO NO HARM TO HUMANS, THE SECOND RULE. . .


IF YOU READ THIS carefully, you will find that Johns Hopkins is saying this: “More technology, less doctor = less technology, more doctor.” War is peace and peace is war. Can you spot the tiny little bit of falsehood contained in the piece below?

Consult with your doctors early and often. That’s not just good advice for patients; it’s what Stephanie Reel, the top IT officer at Johns Hopkins Medicine, says healthcare technology leaders must do to master intelligent medicine.

Speaking at this week’s InformationWeek Healthcare IT Leadership Forum in New York, Reel, head of IT at Johns Hopkins Medicine since 1994 and Chief Information Officer at the University since 1999, said the institution’s success in technology innovation is directly attributable to its habit of involving clinicians in IT projects. That point was backed up by Dr. Peter Greene, Johns Hopkins’ Chief Medical Information Officer, who joined a panel discussion I led exploring “What’s Next In Intelligent Medicine.”


There have been plenty of innovations at Johns Hopkins Medicine, a $5 billion-a-year organization that includes a renowned medical school, five hospitals, a network of physician offices, and massive research operations. The institution was among the pioneers of electronic health records (EHRs) through a clinical information system deployed in the early 1990s. The effort succeeded, Reel says, because it was initially supported by half a dozen clinicians who worked with IT to develop the system.

This interdisciplinary group has since grown to include about 75 people, and it still meets every month to “listen to the people on the front lines who are trying to make a difference,” Reel said.

Johns Hopkins’ clinical information system has evolved to embrace the latest EHR technologies, and it has also become the foundation for what Johns Hopkins calls “smart order sets.” These order sets have built-in checks, balances, and analytics to ensure that the procedures, tests, and protocols appropriate for each patient are followed.

Among the hundreds of smart order sets now in use at Johns Hopkins, one guides decisions on appropriate regimens for diabetics. Hundreds of variables and possible recommendations are preprogrammed into the order set, but the right regimen is determined through a combination of known patient history, up-to-the-moment clinical measures, and the clinician’s answers to a series of questions the system asks conditionally, based on the patient data and the answers already given.
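
Hopkins hasn’t published the internals of its order sets, but the conditional flow described above can be sketched roughly as follows; every field name, question, threshold, and order below is an invented placeholder, not actual clinical content:

# A minimal sketch (not Hopkins' system) of a "smart order set": stored
# patient data drives the branching, and the clinician is asked follow-up
# questions only where the data makes them relevant. All names, thresholds,
# and orders here are illustrative assumptions.

def diabetes_order_set(patient, ask):
    """patient: dict of known history and labs; ask: callable that poses a
    yes/no question to the clinician and returns True or False."""
    orders = []

    # Branch on known patient data first...
    if patient["egfr"] < 30:
        orders.append("avoid metformin (renal impairment)")
    elif ask("Has the patient tolerated metformin previously?"):
        orders.append("metformin 500 mg twice daily")

    # ...then ask questions conditioned on both the data and earlier answers.
    if patient["last_hba1c"] > 9.0:
        if ask("Is the patient willing to start basal insulin?"):
            orders.append("basal insulin, weight-based starting dose")
        else:
            orders.append("intensify oral therapy; recheck HbA1c in 3 months")

    orders.append("HbA1c and renal panel at next visit")
    return orders

# A clinician session can be simulated by scripting the answers.
answers = iter([True, False])
print(diabetes_order_set({"egfr": 55, "last_hba1c": 9.4},
                         lambda q: next(answers)))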

Smart order sets are developed by specialists and extensively studied by peer-review groups before they are embedded into patient care workflows. “The challenge is that you have to do a lot of custom work that isn’t included in off-the-shelf EHR products, so you can’t take on everything,” said Greene.

Johns Hopkins has prioritized based on risk, developing smart order sets for high-morbidity scenarios such as diabetic management and anticoagulation management.

For example, the institution has been widely recognized for its work on preventing venous thromboembolism (VTE), a dangerous blood-clotting condition whose incidence has been decreased by embedding intelligent risk-factor algorithms into admissions, post-operative, and patient-transfer order sets.

The approach has raised VTE assessment rates significantly, and VTE incidents have dropped markedly among at-risk patients, a huge achievement when lives are at stake.

One big risk to all this work is alert fatigue — the common problem whereby so-called intelligent systems and devices fire off so many alerts they are simply ignored. To minimize this, Johns Hopkins has built role-based and history-driven rules into many of its smart order sets.

Cardiologists, for example, would be assumed to be aware of the dangers of ordering both Coumadin and amiodarone for the same patient, whereas an intern would be shown an alert. But if an intern had already cleared such an alert for a particular patient on a particular day, the system recognizes that history and won’t alert that intern again that day regarding that patient.
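
The role-based, history-driven behavior described here is easy to express in code. A minimal sketch, assuming a simple in-memory log; the roles, rule name, and keying scheme are illustrative, not Hopkins’ implementation:

# Suppress alerts by role, and fire at most once per clinician, patient,
# rule, and day. The role list and rule identifier are invented examples.
from datetime import date

SENIOR_ROLES = {"attending", "cardiologist"}
_alerts_shown = set()  # keys: (clinician_id, patient_id, rule_id, date)

def should_alert(clinician, patient_id, rule_id):
    # Specialists assumed to know the interaction see no alert at all.
    if clinician["role"] in SENIOR_ROLES:
        return False
    # Everyone else sees the alert once per patient per rule per day.
    key = (clinician["id"], patient_id, rule_id, date.today())
    if key in _alerts_shown:
        return False
    _alerts_shown.add(key)
    return True

intern = {"id": "u17", "role": "intern"}
print(should_alert(intern, "p001", "warfarin+amiodarone"))  # True: show alert
print(should_alert(intern, "p001", "warfarin+amiodarone"))  # False: cleared today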

At the cutting edge of intelligent medicine is personalized care. Smart order sets are part of Johns Hopkins’ strategy on that front, and the institution also has at least half a dozen departments working on other forms of personalized care.

The Department of Oncology, for instance, is exploring the use of genomics — study of the DNA of individual patients and of cancer cells. Even today, the presence of specific genetic biomarkers can trigger patient-specific recommendations about using, or not using, certain drugs, tests and protocols.

In the future, the DNA of both the patient and his or her cancer will be “readily available and integrated into every decision we’re making about your care,” Greene said, though he acknowledged that might be years from now.

Given that there are 3 billion base pairs in the human genome, the most advanced work will involve big-data computing. Oncology researchers at Johns Hopkins are collaborating with the university’s Department of Astronomy, which has a data center with rack upon rack of graphics processing units (GPUs, not CPUs) that are routinely applied to large-scale computational astronomy calculations.

Reel and Greene encouraged their peers to push the use of predictive analytics and use of data on the clinical side of their operations. Electronic health records and intelligent medicine aren’t where they should be, Reel said, in part because the financial incentives have favored administrative uses of the technology — reducing cost rather than improving diagnostics and clinical care.

“We talk often about productivity gains in medicine because of the introduction of technology, and systems tend to reward quantity rather than quality,” she said.

The next step in intelligent IT support for medicine, she said, would be to work with clinicians to minimize time-wasting usability, interoperability, and security hurdles so they can spend less time interacting with technology while still getting ever-smarter decision support. That, she said, will give doctors more time with their patients.


Software fingers fake entries

In ARTIFICIAL INTELLIGENCE on July 31, 2011 at 10:16 pm

GABLES GREAT HALL, CORNELL UNIVERSITY

Just as some models do not work well, some forms of Artificial Intelligence do seem to work very well indeed. And sadly, we flesh-and-blood people lose when models fail, and we lose when AI does work, because in both instances we become a function of the machine.

READ

If you’re like most people, you give yourself high ratings when it comes to figuring out when someone’s trying to con you. Problem is, most people aren’t actually good at it – at least as far as detecting fake positive consumer reviews.

Fortunately, technology is poised to make up for this all-too-human failing. Cornell University researchers have developed software that they say can detect fake reviews (PDF). The researchers tested the system with reviews of Chicago hotels. They pooled 400 truthful reviews with 400 deceptive reviews produced for the study, then trained their software to spot the difference.

The software got it right about 90 percent of the time. This is a big improvement over the average person, who can detect fake reviews only about 50 percent of the time, according to the researchers.
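
The researchers’ exact pipeline isn’t reproduced here, but the general technique (train a text classifier on labeled truthful and deceptive reviews, then test it) can be sketched with off-the-shelf tools. The toy data below stands in for the study’s 800-review corpus:

# A rough analogue of the approach using scikit-learn: word and word-pair
# counts feed a linear classifier. This is a sketch of the technique, not
# the researchers' code, and two reviews obviously can't reach 90 percent.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = [
    "My husband and I loved our magical family getaway, amazing experience!",
    "Great location; the room was small, the bathroom dated, staff friendly.",
    # ...the study used 400 deceptive and 400 truthful Chicago hotel reviews
]
labels = ["deceptive", "truthful"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(reviews, labels)
print(model.predict(["A wonderful, magical stay for my whole family!"]))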

They say people fall into two camps. One type accepts too much at face value and doesn’t reject enough fake reviews. The second type is overly skeptical and rejects too many real McCoys. Despite their very different approaches, each camp is right about half the time.

The Cornell system is similar to software that sniffs out plagiarism. While the plagiarism software learns to spot the type of language a specific author uses, the Cornell software learns to spot the type of language people use when they’re being deceptive in writing a review, said Myle Ott, the Cornell computer science graduate student who led the research.

The software showed that fake reviews are more like fiction than the real reviews they’re designed to emulate, according to the researchers. In part, deceptive writers used more verbs than real review writers did, while the real writers used more punctuation than the deceptive writers. The deceptive writers also focused more on family and activities while the real writers focused more on the hotels themselves.

The research team’s next steps are to use the technique with other types of service reviews, like restaurant reviews, and eventually try it with product reviews. The idea is to make it harder for unscrupulous sellers to spam review sites with fictitious happy customers.

Of course, just about any technology can be used for good or evil. The Cornell fake review spotter “could just as easily be used to train people to avoid the cues to deception that we learned,” Ott said.

This could lead to an arms race between fake review producers and fake review spotters. Ott and his colleagues are gearing up for it. “We’re considering… seeing if we can learn a new set of deception cues, based on fake reviews written by people trained to beat our original system,” he said.


Cornell software fingers fake online reviews | Crave – CNET.


Mathematical Models, as the vocabulary of the MACHINE

In MATHEMATICAL MODELS on July 31, 2011 at 10:04 pm

HIPPOCRATES NO LONGER MATTERS?

Mathematical Models, as the vocabulary of the MACHINE, cast some or other magical spell over people. As soon as the words are uttered, Mathematical Models take the place of common sense, though they are common sense twice removed. And most of the time they cannot be falsified, as they predict FUTURE events that have implications for the here and now. Nobody cares in twenty or thirty years; then the models are smiled at as archaic attempts. In the case of medicine, models are not that lucky, and we see the real limitations they suffer from. They are VERY WRONG most of the time. That’s the problem.

READ:

Tools designed to predict mortality may not forecast a patient’s demise very accurately, researchers have found.

In a meta-analysis of 118 predictive models, the median area under the curve (AUC) was just 0.77, John Ioannidis, MD, of the University of Ioannina in Greece, and colleagues reported online in the Archives of Internal Medicine.

“Most of the tools included in our analysis are not sufficiently accurate for wide use in clinical practice,” they wrote.

“I think that doctors should continue to use prognostic models but they should understand better what their performance is and its limitations,” Ioannidis told MedPage Today in an email. “Typically, prognostic performance is limited, which means that putting too much trust in a model can lead to a wrong conclusion about whether a patient will die or not.”

A multitude of prognostic tools have been developed in order to predict death in a wide variety of conditions, as forecasting mortality can help patients make sound medical decisions and have realistic expectations.

In order to assess the performance of these models, the researchers reviewed 94 studies, spanning 240 assessments of 118 predictive tools. All were published in 2009; to evaluate the entire predictive literature would be a huge task requiring hundreds of researchers, they explained.

The most commonly evaluated models included the Acute Physiology And Chronic Health Evaluation (APACHE) II model and the MELD score (Model for End-Stage Liver Disease).

The majority of the studies were done in the U.S. or Europe, had a prospective design, and pertained to acute disease conditions — mainly in critical illness; gastroenterology; and cardiovascular, infectious, and malignant diseases.

The median sample size of the studies was 502 patients, with a median number of deaths of 71.

Ioannidis and colleagues found that the AUC of the models ranged from 0.43 to 0.98, with a median of 0.77.

A total of 40% of the tools had an AUC of 0.80 or better, but only 10% had an AUC above 0.90.
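
For readers unfamiliar with the metric: the AUC is the probability that a model assigns a higher risk score to a randomly chosen patient who died than to a randomly chosen survivor, so 0.5 is a coin flip and 1.0 is perfect discrimination. A quick toy illustration (the scores and outcomes below are invented, not study data):

# What an AUC near 0.77 looks like: the model ranks most, but not all,
# deaths above survivors. Numbers are invented for illustration only.
from sklearn.metrics import roc_auc_score

died       = [0, 0, 0, 1, 0, 1, 1, 0, 1, 1]            # 1 = patient died
risk_score = [.1, .2, .3, .15, .4, .5, .6, .7, .8, .9]  # model's predictions

print(roc_auc_score(died, risk_score))  # 0.76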

Only 10 of the tools were evaluated in four studies or more. Of these 10, only one had a summary AUC above 0.80, the researchers said.

As well, there was marked heterogeneity of these tools across diverse settings and studies, which could be due to selective analysis and reporting of studies, leading to exaggerated results of predictive discrimination, they explained.

“Efforts at standardization of reporting are important in this regard,” they wrote.

Higher AUCs were associated with certain characteristics, including lower journal impact factor and larger sample size (P=0.01 and P=0.02, respectively).

The association with journal impact factor may be explained by lower methodologic quality accepted in those journals, the researchers said.

Higher predictive value of the tools was also seen among the highest-risk patients, and among children (P=0.002 and P<0.001, respectively).

The study was limited because it was restricted to analyses published in a single year, and because it only evaluated AUC, even though there are other metrics to determine predictive value.

Still, the researchers concluded that efforts are needed to “organize and synthesize the predictive literature,” in order to ultimately “enhance the evidence derived from predictive research and to establish standard methods for developing, evaluating, reporting, and eventually adopting new predictive tools in clinical practice.”

In the meantime, they said, clinicians “should be cautious about adopting new, initially promising predictive tools, especially complex ones based on expensive measurements that have not been extensively validated and shown to be consistently useful in practice.”


FROM

Kristina Fiore, Staff Writer, MedPage Today
Published: July 26, 2011
Reviewed by Robert Jasmer, MD; Associate Clinical Professor of Medicine, University of California, San Francisco and
Dorothy Caputo, MA, RN, BC-ADM, CDE, Nurse Planner

Crowd-simulating software OR the ability to kill masses of people FAST

In ARCHITECTURE, ARTICLES AND NEWS STORIES, ARTIFICIAL INTELLIGENCE, WARNING on July 28, 2011 at 5:29 pm

WELL, OUR COMPUTERS TOLD US THIS WAS REALLY PRETTY. . .

GIST OF IT: While crowd simulation software has been developed before, the Bath/Bournemouth team hopes to use modern advances in processing power to create a more sophisticated program that models hundreds or thousands of individuals’ movements.

WHY SCYNET CARES: Again, we feel our sense of identity being stripped away. Welcome to efficiency in the machine. This is the way prisons used to be constructed: soulless, while buildings used to be something that is both art and culture. Not anymore. Now it’s pure simulation. And beware of these studies that model movement. One day this very technology can of course find its way into weapons that target people based upon models of their probable movement. Maybe the robot ship can fire at random spots where humans will be in the next fraction of time. Crazy fantastical rubbish? I hope so. In the meantime the technology will be constructing the space where YOU live.


A new project that uses artificial intelligence to model how crowds move could help architects design better buildings.  Researchers from Bath and Bournemouth universities are working with engineering consultancy Buro Happold to create software that shows how a building’s design can enable or prevent large numbers of people moving easily through it.

The program will create a visual representation of a crowd, modelling it as a group of many individual ‘agents’ instead of as a single mass of people and giving each agent its own goals and behaviour.

‘What Buro Happold wants to be able to understand is the impact of a space on the way people move,’ said Julian Padget, project supervisor and senior lecturer in computer science at Bath University.

‘There’s also the related question of what happens when a large volume of people are all trying to get somewhere rapidly, such as in an emergency situation.’

While crowd simulation software has been developed before, the Bath/Bournemouth team hopes to use modern advances in processing power to create a more sophisticated program that models hundreds or thousands of individuals’ movements.

The project will tackle the problems of simulating the crowds and rendering them in a believable way, from both a wide-angle and a close-up view, meaning the individuals have to appear realistic and show how their movements affect the rest of the group.

‘You don’t want it to look like a bunch of automatons wandering around — the reason being that it distracts the viewer, because they find it unnatural,’ said Padget. ‘They pay attention to that rather than what the picture overall is showing them.’

Instead of programming the computerised people with specific instructions, the computer will give them a destination and a range of actions to choose from and leave them to determine their own route, partly based on data gathered from observing real crowds.
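
None of the project’s code is public, but the agent-based idea reads like this in miniature; the steering rule, speeds, and radii below are invented placeholders for the far richer behaviour models the researchers describe:

# Each agent has its own destination and steers toward it while being
# pushed away from nearby agents. All parameters are illustrative only.
import math, random

class Agent:
    def __init__(self, x, y, goal):
        self.x, self.y = x, y
        self.goal = goal  # each agent gets its own destination

    def step(self, others, speed=0.5, personal_space=1.0):
        gx, gy = self.goal
        dx, dy = gx - self.x, gy - self.y
        dist = math.hypot(dx, dy) or 1e-9
        vx, vy = dx / dist * speed, dy / dist * speed
        # Crude repulsion stands in for behaviour learned from real crowds.
        for o in others:
            if o is self:
                continue
            ox, oy = self.x - o.x, self.y - o.y
            d = math.hypot(ox, oy)
            if 0 < d < personal_space:
                vx += ox / d * (personal_space - d)
                vy += oy / d * (personal_space - d)
        self.x += vx
        self.y += vy

# A hundred agents all heading for the same exit, as in an evacuation.
crowd = [Agent(random.uniform(0, 50), random.uniform(5, 50), goal=(50, 0))
         for _ in range(100)]
for _ in range(200):
    for a in crowd:
        a.step(crowd)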

But there are still limits to computational power and simulating greater numbers of people will require each individual character to have less intelligent programming, said Padget.

‘Our challenge is to work out what we can throw away from the sophisticated model and still get plausible-looking behaviour when we’ve got a large number of individuals.’

The simulation software will also need to be compatible with a suitable platform to render buildings designed by Buro Happold.

The four-year research project will be carried out by an engineering doctorate student through the universities’ Centre for Digital Entertainment, funded by the EPSRC.


Crowd-simulating software could improve building design | News | The Engineer.

CLEVER SENSE WILL SNIFF YOU OUT WHEREVER YOU EAT OR DRINK

In ARTICLES AND NEWS STORIES, ARTIFICIAL INTELLIGENCE, RELATED ARTICLES, WARNING on July 28, 2011 at 5:11 pm

LARRY PAGE


  • GIST OF IT:  The app analyzes data from around the Web to figure out what you will like, based on similarities with other people. 
  • WHY SCYNET CARES:  How about being monitored while you do not even participate; how about being stereotyped by circumstantial evidence about your way of life, by a thing on somebody’s phone. How about losing your identity; how about this technology being used to profile you? How about paying more for health care because you had one pizza too many at the unhealthy local coffee shop? The nightmare scenarios are endless. This is even worse than Orwell could have imagined. All the little sheep walking freely into the machine!


Google CEO Larry Page and Microsoft CEO Steve Ballmer agree on one thing: the future of search is tied in with artificial intelligence.

Page has talked about the ideal search engine knowing what you want BEFORE you ask it, and Ballmer recently explained Microsoft’s multibillion dollar investment in Bing by saying that search research is the best way to progress toward artificial intelligence apps that help you DO things, not just find things. So both companies will probably be taking a very close look at CleverSense, which launches its first iPhone app, a “personal concierge” called Alfred (formerly Seymour), today.

The app analyzes data from around the Web to figure out what you will like, based on similarities with other people. It’s similar to the recommendation engines pioneered by Amazon — “other people who bought X also bought Y” — or the Music Genome Project that eventually grew into Pandora. Only it’s applied to the real world.

CleverSense CEO Babak Pahlavan explains that the company grew out of a research project into predictive algorithms that he was working on at Stanford three years ago. The technology crawls the Web looking for what users are saying about particular products, and is able to categorize the results into between 200 and 400 attributes and sentiments for each one.

For instance, if somebody visits a coffee shop and posts on Yelp “the cappuccino at X was awesome but salad was crap,” CleverSense understands the words “awesome” and “crap,” and also notes that “cappuccino” is a high-interest word for coffee shops.
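
CleverSense’s crawler and classifier are proprietary, but the flavor of that analysis can be sketched in a few lines; the word lists below are toy stand-ins for the 200 to 400 learned attributes per category:

# Pair sentiment words with attribute terms, clause by clause. Everything
# here (word lists, clause splitting) is an invented simplification.
POSITIVE = {"awesome", "great", "amazing"}
NEGATIVE = {"crap", "terrible", "stale"}
COFFEE_SHOP_ATTRIBUTES = {"cappuccino", "espresso", "salad", "wifi"}

def score_review(text):
    scores = {}
    # Split on "but" so praise for one item doesn't bleed onto the next.
    for clause in text.lower().split(" but "):
        words = set(clause.split())
        sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
        for attr in words & COFFEE_SHOP_ATTRIBUTES:
            scores[attr] = scores.get(attr, 0) + sentiment
    return scores

print(score_review("the cappuccino at X was awesome but salad was crap"))
# {'cappuccino': 1, 'salad': -1}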

This kind of analysis is performed millions of times per day. When it launches, Alfred will have a database of more than 600,000 locations with between 200 and 400 categories rated ON EACH ONE. As you rate places, the app will get even more accurate.

Alfred is focused on four categories — bars, restaurants, nightclubs, and coffee shops — but CleverSense plans to apply its technology to other areas as well. Pahlavan explains that CleverSense could work very well with daily deals services like Groupon, LivingSocial, or Google Offers — instead of having merchants throw deals out to the entire world, they could target them at the users who would be most likely to buy.

At launch, the data is anonymous, but CleverSense is going to add Facebook Connect integration so it can add social data into its recommendations — if it knows that a lot of your friends are saying positive things about a particular bar, it will weigh those recommendations more highly than statements from random strangers.

The company has been running on an investment of about $6 million from angel investors, but Pahlavan says the company is planning to raise further rounds later this year. That’s assuming it doesn’t get snapped up by a big company first.

Microsoft may have an inside shot — CleverSense is participating in the company’s BizSpark program, which gives discounted software and other aid to startups — but there are tons of other companies who should be interested in the technology as well.


This iPhone App Knows What You Like — Before You Ask It A Single Question.

WHY WATSON, IBM’S NEW C-3PO IS VERY VERY DANGEROUS

In DANGEROUS DISCIPLE, REFERENCE LIBRARY, Uncategorized, WARNING on July 28, 2011 at 4:53 pm

IBM Watson (Jeopardy at Carnegie Mellon)

GIST OF IT: IBM’s computer, Watson, now has enough language skills to beat humans in a game whose questions involve “subtle meanings, irony, riddles,” and other linguistic complications.

WHY SCYNET CARES:  Read carefully. Watson was built for a mere $3 million. And it was tremendously successful at “being human” on the Jeopardy! show. This is happening now, here, in real time, not in some futuristic horror movie. And think about it. How will this artificial humanity be utilized, and for whom will it work? It will be in the service of various corporations, designed to outsmart you in situations where you have to make choices. And the corporations have a different aim than you do. Think about it. Imagine a machine interviewing your son and, by using vast intelligence and all the records from your son’s life, succeeding in getting him, for example, to join the Marines, even though you know he would never have wanted that for himself. How do you protect yourself from superior intelligence aimed at getting you to do things that might not be good for you?

C-3PO is a protocol droid designed to serve humans, and boasts that he is fluent “in over six million forms of communication.”

This year’s FOSE conference had an unexpected theme: artificial intelligence.

The subject came up in Apple co-founder Steve Wozniak’s keynote speech on the first day of the conference. It took center stage, though, on day 2 of the conference, when IBM Research Vice President of Software David McQueeney presented the Beyond Jeopardy! – The Implications of IBM Watson keynote.

McQueeney heads the division responsible for Watson, the $3 million natural language computer IBM made famous on the television quiz show Jeopardy!, where it bested two humans in a two-day trivia contest. At first, McQueeney seemed to be an odd choice of speaker for a gathering of government acquisitions professionals and the people who want to sell them goods and services – until he started to draw connections for the crowd.

The Challenge

IBM, McQueeney said, had left little doubt that computers, with sufficient processing power, could best humans at quantitative tasks – even those that involve some forethought and creativity. The company’s previous artificial intelligence experiment, known as Deep Blue, had defeated world champion Garry Kasparov in a series of chess matches in 1997.

But a challenge remained: language. Computers, in the past, hadn’t been able to understand language the way humans use it. McQueeney explained, “When humans communicate with computers, they have to use a discrete, exact programming language.” But when humans communicate with one another, they are more creative and fluid. “Unstructured sentences are very hard for machines to process,” he said. Human speech involves shades of meaning and intent, even puns and word games, that flummox traditional computers. “It took us much more scientific effort and computational work to tackle human language than to win chess games,” he said.

Jeopardy, McQueeney said, was the ultimate challenge because the game’s questions involve “subtle meanings, irony, riddles,” and other linguistic complications. To succeed, the system couldn’t simply search through structured information (something that, McQueeney said, Watson did for just 15 percent of the questions asked in the contest). Instead, it had to parse meaning from sentences, much like a human does. “We consider it a long-standing challenge in artificial intelligence to emulate a slice of human behavior,” said McQueeney. Ultimately, the effort succeeded – Watson won the contest – but breakdowns in the company’s solutions were obvious. The computer struggled, McQueeney noted, with the shortest questions.

The Applications

So, IBM pulled off quite a parlor trick with, by McQueeney’s estimate, $3 million worth of off-the-shelf parts used to build Watson. But what does the technology mean for government?  McQueeney rattled off a series of possible applications for the system’s natural language abilities. For instance, he noted, Watson’s technology could be used to “support doctors’ differential diagnosis, using data in the form of cited medical papers and giving quick, real-time responses to questions.”  The technology could be used, he said, to assist technical support and help desk services, improve knowledge management at large companies, and improve information sharing among national security workers. “Computers can now support human interaction in new ways,” he said.

What’s Next

IBM expects to see the first application of the technology in the healthcare field, McQueeney said. The company is currently working with several medical schools to develop knowledge systems that will assist doctors with rapid medical decisions. The company has announced hopes to develop a commercial offering by the end of 2012.

FOSE 2011: IBM’s Artificial Intelligence Coming – Learn More at GovWin.

Florida Driver’s Information Sold

In ARTICLES AND NEWS STORIES, RELATED ARTICLES, Uncategorized, WARNING on July 27, 2011 at 11:35 pm

When you go for your driver’s license at the DMV the last thing you think about is your personal information being sold.

According to a report released by the State of Florida, the Department of Highway Safety and Motor Vehicles made about $62 million.

They made that money by selling the personal information of Florida drivers to companies like Lexis-Nexis and ChoicePoint, which conduct background checks. The information gathered includes your full name, address, and driving history.

We went out to speak with people in the Panama City area to see what they thought about their personal information being sold. Many didn’t even know their personal information was at the mercy of the state.

Christina Van-Dyk: “I didn’t know that and didn’t like the idea because I really don’t want other people knowing my information.”

Robin Hicks: “I don’t think it’s right. I think they should disclose if that’s the situation. You have to fill out forms and sign everything to get your driver’s license and that should be something that should be disclosed at that time.”

Although selling personal information is legal, the law says companies are not allowed to use that information to create new business. State officials insist they do not sell social security numbers to these companies.

via Florida Driver’s Information Sold.

Your Future Co-Worker

In ARTICLES AND NEWS STORIES, RELATED ARTICLES, Uncategorized on July 27, 2011 at 9:38 pm

THE GIST:

  • NASA and GM have unveiled robots that work alongside humans — on Earth and in space.
  • Engineers are trying to mimic human form and have the Robonaut work at human speeds.
  • NASA may employ the robots as spacewalkers’ assistants.




Robot twins, intended to lend a hand to spacewalking astronauts, as well as make the factory floor a safe and efficient meeting ground for humans and droids, have been unveiled by NASA and General Motors.

“A giant robot swinging around that doesn’t know whether a person is there or not is a bad thing. You can end up with all kinds of accidents. Robots can be very dangerous pieces of equipment,” Marty Linn, GM’s principal engineer of robotics, told Discovery News.

Large robots currently used in GM’s factories are caged to protect workers.

For the past three years, engineers from NASA and GM have been working on the prototypes, called Robonauts, at the Johnson Space Center in Houston. GM’s droid likely will end up at the firm’s technology development center in Michigan, where engineers will use it as a test bed for sensors, software and other products that could be incorporated into future cars. It could also improve manufacturing processes.

“We envision this kind of technology to be able to be used right around humans. Both NASA and GM share this vision of humans and robots working together,” said Linn.

“This is a human-scale robot. It works at human speeds. We’re working closer and closer to the human form, and that’s a difficult challenge,” added Ron Diftler, who oversees the Robonaut project for NASA.

NASA would like to see a robot in space, with enough dexterity to handle pliable insulation and other materials too tricky for the cranes and robotic arms available on the space station today.

“We are foreseeing this as an EVA (extravehicular activity, or spacewalk) assistant,” Diftler said.

For example, the droid could save time and reduce risks to spacewalking astronauts by going outside first to prepare work sites.

The Robonauts, which were unveiled Thursday, are based on work NASA and the Defense Advanced Research Projects Agency did a decade ago.

Artificial Intelligence

In RELATED ARTICLES on July 27, 2011 at 9:12 pm

Artificial Intelligence Made From DNA : Discovery News.

Eat your heart out Steven Spielberg — turns out artificial intelligence is not just a figment of your imagination.

A team of researchers led by Lulu Qian from the California Institute of Technology (Caltech) has, for the first time, developed an artificial neural network — that is, the beginnings of a brain — out of DNA molecules.

They turned to molecules because they knew that before the neural-based brain evolved, single-celled organisms showed limited forms of intelligence. These microorganisms did not have brains, but instead had molecules that interacted with each other and spurred the creatures to search for food and avoid toxins. The bottom line is that molecules can act like circuits, processing and transmitting information and computing data.

The Caltech team used DNA molecules specifically for the experiment, because these molecules interact in specific ways determined by the sequence of their four bases: adenine (abbreviated A), cytosine (C), guanine (G) and thymine (T). And what’s more, scientists can encode the sequence into strands of DNA molecules, essentially programming them to function in a predetermined way.

Without getting too complicated, Qian and her team created four highly simplified artificial neurons in test tubes, built from 112 strands of DNA, each strand programmed with a specific sequence of bases to interact with other strands. The interactions resulted in outputs (or not), basically mimicking the actions of neurons firing. In order to see the DNA neurons firing, the scientists attached a fluorescent molecular marker that lit up when activated.

Next, the researchers played a trivia game with the neural network to see if it could identify one of four scientists based on a series of yes/no questions. Basic information related to the identity of the scientists was given to the tiny DNA brain in the form of encoded strands of DNA.

To quiz the brain, a human player placed DNA strands that hinted at the answer into the test tube. With these clues, the neural network was able to produce the correct answer, which was visible thanks to the fluorescent markers.

In this way, the network could also communicate when it lacked enough information to correctly identify one of the scientists, or if any of the clues contained contradictory information.

The research team played this game using 27 possible ways of answering questions and the neural network in the test tube answered correctly each time.
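
The wet-lab circuit computes chemically, but the game it plays amounts to associative recall: complete a partial pattern of yes/no answers to the closest stored identity. A toy in-silico analogue, where the names and answer patterns are invented and simple nearest-pattern matching stands in for the network’s dynamics:

# Four "scientists", each a stored pattern of four yes/no answers. recall()
# returns every stored identity closest to the clues, so partial clues can
# yield an ambiguous (multi-name) answer, as in the trivia game above.
SCIENTISTS = {
    "Rosalind Franklin":      ( 1,  1, -1, -1),
    "Alan Turing":            ( 1, -1,  1, -1),
    "Santiago Ramon y Cajal": (-1,  1,  1,  1),
    "Claude Shannon":         (-1, -1, -1,  1),
}

def recall(clues):
    """clues: 4-tuple of +1 (yes), -1 (no), or 0 (not yet answered)."""
    best, best_score = [], None
    for name, pattern in SCIENTISTS.items():
        # Agreement counted over the answered questions only.
        score = sum(c * p for c, p in zip(clues, pattern) if c != 0)
        if best_score is None or score > best_score:
            best, best_score = [name], score
        elif score == best_score:
            best.append(name)  # still ambiguous: more clues needed
    return best

print(recall((1, -1, 0, 0)))  # ['Alan Turing']
print(recall((1, 0, 0, 0)))   # two candidates: not enough information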

Artificial Intelligence Diagnoses Abuse : Discovery News

In DAILY LOG, NEURAL NETWORKS, RELATED ARTICLES on July 27, 2011 at 9:02 pm

A broken bone one day, a particular infection a few months later, and depression the following year may appear to be separate medical issues.

However, to a new artificial intelligence program developed by Boston doctors, these are all symptoms of domestic abuse.

The new software can identify abuse victims up to six years before these cases would otherwise be found and could eventually be used to diagnose just about any disease or injury.

“It’s very difficult to detect domestic abuse because it often happens in the privacy of the home,” said Ben Reis, a doctor at Children’s Hospital Boston (CHB) who helped develop the program.

“Doctors are often on the front lines of detecting abuse, but so often the doctor is focused on treating the injury, they don’t see the context behind it.”

Dozens of studies over the last 40 years have correlated various illnesses, injuries and other conditions with abuse. Bruising to the middle of the forearm or the core of the body instead of the elbow or knee can signal abuse. Depression or alcoholism may also be symptoms of this condition.
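
The Boston team’s model isn’t described in implementation detail, but the underlying idea, scoring combinations of findings that look innocent one at a time, can be sketched as follows; the codes, weights, and threshold are invented for illustration:

# Scan a patient's coded history for combinations of findings that the
# literature associates with abuse. Codes, weights, and the cutoff are
# invented placeholders; a flag is a prompt for follow-up, not a diagnosis.
ABUSE_ASSOCIATED = {
    "forearm_midshaft_bruise": 2.0,  # defensive-injury location
    "torso_bruise": 2.0,
    "depression": 1.0,
    "alcoholism": 1.0,
    "repeat_fracture": 1.5,
}

def abuse_risk(history):
    """history: list of (year, finding_code) pairs spanning many visits."""
    findings = {code for _, code in history}
    score = sum(w for code, w in ABUSE_ASSOCIATED.items() if code in findings)
    return score, score >= 3.0

timeline = [(2009, "repeat_fracture"), (2009, "torso_bruise"),
            (2010, "depression")]
print(abuse_risk(timeline))  # (4.5, True): separately minor, jointly a flag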

http://news.discovery.com/tech/artificial-intelligence-domestic-abuse.html