ADAM PARTNERS

Archive for the ‘WARNING’ Category

PLEASE SIT DOWN. MACHINE WILL SEE YOU SHORTLY, NUMBER 445343-01.

In ARTIFICIAL INTELLIGENCE, WARNING on July 31, 2011 at 10:29 pm
An electronic medical record example

FIRST RULE OF ROBOTICS, DO NO HARM TO HUMANS, THE SECOND RULE. . .

 

IF YOU READ THIS carefully, you will find that Johns Hopkins is really saying this: “More technology, less doctor = less technology, more doctor. War is peace and peace is war.” Can you spot the tiny little bit of falsehood contained in the piece below?

Consult with your doctors early and often. That’s not just good advice for patients; it’s what Stephanie Reel, the top IT officer at Johns Hopkins Medicine, says healthcare technology leaders must do to master intelligent medicine.

Speaking at this week’s InformationWeek Healthcare IT Leadership Forum in New York, Reel, head of IT at Johns Hopkins Medicine since 1994 and Chief Information Officer at the University since 1999, said the institution’s success in technology innovation is directly attributable to its habit of involving clinicians in IT projects. That point was backed up by Dr. Peter Greene, Johns Hopkins’ Chief Medical Information Officer, who joined a panel discussion I led exploring “What’s Next In Intelligent Medicine.”

There have been plenty of innovations at Johns Hopkins Medicine, a $5 billion-a-year organization that includes a renowned medical school, five hospitals, a network of physician offices, and massive research operations. The institution was among the pioneers of electronic health records (EHRs) through a clinical information system deployed in the early 1990s. The effort succeeded, Reel says, because it was initially supported by half a dozen clinicians who worked with IT to develop the system.

This interdisciplinary group has since grown to include about 75 people, and it still meets every month to “listen to the people on the front lines who are trying to make a difference,” Reel said.

Johns Hopkins’ clinical information system has evolved to embrace the latest EHR technologies, and it has also become the foundation for what Johns Hopkins calls “smart order sets.” These order sets have built-in checks, balances, and analytics to ensure that the procedures, tests, and protocols appropriate for each patient are followed.

Among the hundreds of smart order sets now in use at Johns Hopkins, one guides decisions on appropriate regimens for diabetics. Hundreds of variables and possible recommendations are preprogrammed into the order set, but the right regimen is determined through a combination of known patient history, up-to-the-moment clinical measures, and the clinician’s answers to follow-up questions the system asks conditionally, based on the patient data and the answers already given.
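
As a concrete illustration only, the following sketch shows the general shape of such conditional logic in Python. The field names, thresholds, questions, and regimens are invented for this example and are not Johns Hopkins’ actual rules.

```python
# Hypothetical sketch of one "smart order set" decision step.
# Field names, thresholds, and regimens are illustrative only.

def diabetes_order_set(patient, ask):
    """Suggest a regimen from stored patient data plus clinician answers.

    `patient` is a dict of known history and current labs; `ask` poses a
    follow-up question to the clinician and returns True or False.
    """
    if patient["creatinine_mg_dl"] > 2.0:
        # This question fires only because of the abnormal lab value.
        if ask("Reduced renal clearance suspected. Use renally adjusted dosing?"):
            return "basal insulin, renally adjusted dosing"
    if patient["on_steroids"] and ask("Steroid course longer than 7 days?"):
        return "basal-bolus insulin with enhanced glucose monitoring"
    return "standard protocol, recheck glucose in 6 hours"

# Example run, with canned answers standing in for the clinician.
patient = {"creatinine_mg_dl": 2.4, "on_steroids": False}
answers = iter([True])
print(diabetes_order_set(patient, lambda question: next(answers)))
```

The point of the sketch is only that the recommendation emerges from stored data plus conditionally asked questions, rather than from a fixed checklist.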

Smart order sets are developed by specialists and extensively studied by peer-review groups before they are embedded into patient care workflows. “The challenge is that you have to do a lot of custom work that isn’t included in off-the-shelf EHR products, so you can’t take on everything,” said Greene.

Johns Hopkins has prioritized based on risk, developing smart order sets for high-morbidity scenarios such as diabetic management and anticoagulation management.

For example, the institution has been widely recognized for its work on preventing venous thromboembolism (VTE), a dangerous blood-clotting condition whose incidence has been reduced by embedding intelligent risk-factor algorithms into admissions, post-operative, and patient-transfer order sets.

The approach has raised VTE assessment rates substantially, and VTE incidents have dropped markedly among at-risk patients, a huge achievement when lives are at stake.

One big risk to all this work is alert fatigue — the common problem whereby so-called intelligent systems and devices fire off so many alerts they are simply ignored. To minimize this, Johns Hopkins has built role-based and history-driven rules into many of its smart order sets.

Cardiologists, for example, are assumed to be aware of the dangers of ordering both Coumadin and amiodarone for the same patient, whereas an intern would be shown an alert. But if an intern has already cleared such an alert for a particular patient on a particular day, the system recognizes that history and won’t alert that intern again that day regarding that patient.
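
A minimal sketch of that suppression logic, with invented role names, identifiers, and data structures (this is not the Johns Hopkins implementation), might look like this:

```python
from datetime import date

# Roles assumed to already know about a given interaction; illustrative only.
EXPERT_ROLES = {"cardiologist"}

# (clinician_id, patient_id, alert_code, date) tuples already shown today.
cleared_alerts = set()

def should_alert(role, clinician_id, patient_id, alert_code, today=None):
    """Return True only if the alert is worth showing to this clinician now."""
    today = today or date.today()
    if role in EXPERT_ROLES:
        return False                          # role-based suppression
    key = (clinician_id, patient_id, alert_code, today)
    if key in cleared_alerts:
        return False                          # history-based suppression
    cleared_alerts.add(key)                   # remember it was shown
    return True

# A Coumadin/amiodarone interaction: the intern sees it once per patient per day.
print(should_alert("intern", "dr_42", "pt_7", "coumadin_amiodarone"))        # True
print(should_alert("intern", "dr_42", "pt_7", "coumadin_amiodarone"))        # False
print(should_alert("cardiologist", "dr_9", "pt_7", "coumadin_amiodarone"))   # False
```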

At the cutting edge of intelligent medicine is personalized care. Smart order sets are part of Johns Hopkins’ strategy on that front, and the institution also has at least half a dozen departments working on other forms of personalized care.

The Department of Oncology, for instance, is exploring the use of genomics — study of the DNA of individual patients and of cancer cells. Even today, the presence of specific genetic biomarkers can trigger patient-specific recommendations about using, or not using, certain drugs, tests and protocols.

In the future, the DNA of both the patient and his or her cancer will be “readily available and integrated into every decision we’re making about your care,” Greene said, though he acknowledged that might be years from now.

Given that there are 3 billion base pairs in the human genome, the most advanced work will involve big-data computing. Oncology researchers at Johns Hopkins are collaborating with the university’s Department of Astronomy, which has a data center with rack upon rack of graphical processing units (GPUs, not CPUs) that are routinely applied to large-scale computational astronomy calculations.
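
A back-of-the-envelope calculation makes the scale concrete, using the common simplification of two bits per base and ignoring quality scores, metadata, and tumor sequencing:

```python
# Rough storage estimate for raw genome sequence at 2 bits per base.
base_pairs = 3_000_000_000                    # ~3 billion base pairs per genome
bytes_per_genome = base_pairs * 2 / 8         # A, C, G, T -> 2 bits each
print(f"~{bytes_per_genome / 1e6:.0f} MB per genome")                    # ~750 MB
print(f"~{10_000 * bytes_per_genome / 1e12:.1f} TB for 10,000 genomes")  # ~7.5 TB
```

Comparing many genomes, or a patient’s genome against tumor DNA, multiplies those numbers quickly, which is what makes racks of GPUs attractive.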

Reel and Greene encouraged their peers to push the use of predictive analytics and data on the clinical side of their operations. Electronic health records and intelligent medicine aren’t where they should be, Reel said, in part because the financial incentives have favored administrative uses of the technology, reducing cost rather than improving diagnostics and clinical care.

“We talk often about productivity gains in medicine because of the introduction of technology, and systems tend to reward quantity rather than quality,” she said.

The next step in intelligent IT support for medicine, she said, would be to work with clinicians to minimize time-wasting usability, interoperability, and security hurdles so they can spend less time interacting with technology while still getting ever-smarter decision support. That, she said, will give doctors more time with their patients.

Crowd-simulating software OR the ability to kill masses of people FAST

In ARCHITECTURE, ARTICLES AND NEWS STORIES, ARTIFICIAL INTELLIGENCE, WARNING on July 28, 2011 at 5:29 pm
LDA entrance in Palestra House

WELL, OUR COMPUTERS TOLD US THIS WAS REALLY PRETTY. . .

GIST OF IT: While crowd simulation software has been developed before, the Bath/Bournemouth team hopes to use modern advances in processing power to create a more sophisticated program that models hundreds or thousands of individuals’ movements.

WHY SCYNET CARES:  Again, we feel our sense of identity being stripped away. Welcome to efficiency in the machine. This is the way prisons used to be constructed: soulless, while buildings used to be something that is both art and culture. Not anymore. Now it’s pure simulation. And beware of these studies that model movement. One day this very technology can of course find its way into systems that target people based upon models of their probable movement. Maybe the robot ship can fire at random spots where humans will be in the next fraction of a second. Crazy fantastical rubbish? I hope so. In the meantime the technology will be constructing the space where YOU live.

 

 

A new project that uses artificial intelligence to model how crowds move could help architects design better buildings.  Researchers from Bath and Bournemouth universities are working with engineering consultancy Buro Happold to create software that shows how a building’s design can enable or prevent large numbers of people moving easily through it.

The program will create a visual representation of a crowd, modelling it as a group of many individual ‘agents’ instead of as a single mass of people and giving each agent its own goals and behaviour.

‘What Buro Happold wants to be able to understand is the impact of a space on the way people move,’ said Julian Padget, project supervisor and senior lecturer in computer science at Bath University.

‘There’s also the related question of what happens when a large volume of people are all trying to get somewhere rapidly, such as in an emergency situation.’

While crowd simulation software has been developed before, the Bath/Bournemouth team hopes to use modern advances in processing power to create a more sophisticated program that models hundreds or thousands of individuals’ movements.

The project will tackle the problems of simulating the crowds and rendering them in a believable way, from both a wide-angle and a close-up view, meaning the individuals have to appear realistic and show how their movements affect the rest of the group.

‘You don’t want it to look like a bunch of automatons wandering around — the reason being that it distracts the viewer, because they find it unnatural,’ said Padget. ‘They pay attention to that rather than what the picture overall is showing them.’

Instead of programming the computerised people with specific instructions, the computer will give them a destination and a range of actions to choose from and leave them to determine their own route, partly based on data gathered from observing real crowds.
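
In agent-based terms, each simulated person gets a goal and a small repertoire of local rules rather than a scripted path. A toy sketch in Python, with made-up forces and parameters and far simpler than anything Buro Happold would deploy, looks like this:

```python
import random

class Agent:
    """One crowd member with its own destination; movement is chosen locally."""
    def __init__(self, x, y, goal):
        self.x, self.y = x, y
        self.goal = goal                       # (x, y) destination

    def step(self, others, speed=1.0):
        # Pull toward the goal.
        gx, gy = self.goal
        dx, dy = gx - self.x, gy - self.y
        dist = max((dx * dx + dy * dy) ** 0.5, 1e-9)
        vx, vy = speed * dx / dist, speed * dy / dist
        # Push away from nearby agents (crude collision avoidance).
        for other in others:
            if other is self:
                continue
            ox, oy = self.x - other.x, self.y - other.y
            d = max((ox * ox + oy * oy) ** 0.5, 1e-9)
            if d < 2.0:
                vx += ox / d
                vy += oy / d
        # A little noise keeps the crowd from looking like automatons.
        self.x += vx + random.uniform(-0.1, 0.1)
        self.y += vy + random.uniform(-0.1, 0.1)

agents = [Agent(random.uniform(0, 10), random.uniform(0, 10), goal=(50, 50))
          for _ in range(100)]
for _ in range(200):                           # simulate 200 time steps
    for agent in agents:
        agent.step(agents)
```

Even a toy like this shows where the cost goes: the pairwise avoidance check grows with the square of the crowd size, which is exactly the kind of sophistication that has to be pared back as the numbers climb.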

But there are still limits to computational power, and simulating greater numbers of people will require each individual character to have less intelligent programming, said Padget.

‘Our challenge is to work out what we can throw away from the sophisticated model and still get plausible-looking behaviour when we’ve got a large number of individuals.’

The simulation software will also need to be compatible with a suitable platform to render buildings designed by Buro Happold.

The four-year research project will be carried out by an engineering doctorate student through the universities’ Centre for Digital Entertainment, funded by the EPSRC.

 

 

Crowd-simulating software could improve building design | News | The Engineer.

CLEVER SENSE WILL SNIFF YOU OUT WHEREVER YOU EAT OR DRINK

In ARTICLES AND NEWS STORIES, ARTIFICIAL INTELLIGENCE, RELATED ARTICLES, WARNING on July 28, 2011 at 5:11 pm
Larry Page - Caricature

LARRY PAGE

 

 

  • GIST OF IT:  The app analyzes data from around the Web to figure out what you will like, based on similarities with other people. 
  • WHY SCYNET CARES:  How about being monitored when you do not even participate; how about being stereotyped by circumstantial evidence about your way of life, by a thing on somebody’s phone. How about losing your identity; how about this technology being used to profile you? How about paying more for health care because you had one pizza too many at the unhealthy local coffee shop? The nightmare scenarios are endless. This is even worse than Orwell could have imagined. All the little sheep walking freely into the machine!

 

Google CEO Larry Page and Microsoft CEO Steve Ballmer agree on one thing: the future of search is tied in with artificial intelligence.

Page has talked about the ideal search engine knowing what you want BEFORE you ask it, and Ballmer recently explained Microsoft’s multibillion dollar investment in Bing by saying that search research is the best way to progress toward artificial intelligence apps that help you DO things, not just find things. So both companies will probably be taking a very close look at CleverSense, which launches its first iPhone app, a “personal concierge” called Alfred (formerly Seymour), today.

The app analyzes data from around the Web to figure out what you will like, based on similarities with other people. It’s similar to the recommendation engines pioneered by Amazon — “other people who bought X also bought Y” — or the Music Genome Project that eventually grew into Pandora. Only it’s applied to the real world.

CleverSense CEO Babak Pahlavan explains that the company grew out of a research project into predictive algorithms that he was working on at Stanford three years ago. The technology crawls the Web looking for what users are saying about particular products, and is able to categorize the results into between 200 and 400 attributes and sentiments for each one.

For instance, if somebody visits a coffee shop and posts on Yelp “the cappuccino at X was awesome but salad was crap,” CleverSense understands the words “awesome” and “crap,” and also notes that “cappuccino” is a high-interest word for coffee shops.
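
The article does not describe CleverSense’s algorithm, but the rough shape of that kind of analysis can be sketched in a few lines of Python. The word lists and attribute mappings below are invented for illustration; a real system would be far richer:

```python
# Toy sentiment/attribute extraction in the spirit of the Yelp example.
POSITIVE = {"awesome", "great", "amazing", "delicious"}
NEGATIVE = {"crap", "terrible", "awful", "bland"}
ATTRIBUTES = {"cappuccino": "coffee", "espresso": "coffee",
              "salad": "food", "sandwich": "food"}

def score_review(text):
    """Return {attribute: sentiment} for attributes mentioned in a review."""
    scores = {}
    # Split on "but" so praise for the coffee does not leak onto the salad.
    for clause in text.lower().replace(",", " ").split(" but "):
        words = clause.split()
        sentiment = (sum(w in POSITIVE for w in words)
                     - sum(w in NEGATIVE for w in words))
        for w in words:
            if w in ATTRIBUTES:
                scores[ATTRIBUTES[w]] = scores.get(ATTRIBUTES[w], 0) + sentiment
    return scores

print(score_review("the cappuccino at X was awesome but salad was crap"))
# {'coffee': 1, 'food': -1}
```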

This kind of analysis is performed millions of times per day. When it launches, Alfred will have a database of more than 600,000 locations with between 200 and 400 categories rated ON EACH ONE. As you rate places, the app will get even more accurate.

Alfred is focused on four categories — bars, restaurants, nightclubs, and coffee shops — but CleverSense plans to apply its technology to other areas as well. Pahlavan explains that CleverSense could work very well with daily deals services like Groupon, LivingSocial, or Google Offers — instead of having merchants throw deals out to the entire world, they could target them at the users who would be most likely to buy.

At launch, the data is anonymous, but CleverSense is going to add Facebook Connect integration so it can add social data into its recommendations — if it knows that a lot of your friends are saying positive things about a particular bar, it will weigh those recommendations more highly than statements from random strangers.
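
Weighting a friend’s opinion more heavily than a stranger’s is, at bottom, just a weighted average. A hypothetical sketch (not CleverSense’s actual formula, and the weights are arbitrary):

```python
# Hypothetical weighted scoring: friends' ratings count more than strangers'.
FRIEND_WEIGHT = 3.0
STRANGER_WEIGHT = 1.0

def venue_score(ratings):
    """ratings: list of (rating_0_to_5, is_friend) tuples."""
    total, weight_sum = 0.0, 0.0
    for rating, is_friend in ratings:
        w = FRIEND_WEIGHT if is_friend else STRANGER_WEIGHT
        total += w * rating
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# Two enthusiastic friends outweigh three lukewarm strangers.
print(venue_score([(5, True), (5, True), (3, False), (2, False), (3, False)]))  # ~4.2
```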

The company has been running on an investment of about $6 million from angel investors, but Pahlavan says the company is planning to raise further rounds later this year. That’s assuming it doesn’t get snapped up by a big company first.

Microsoft may have an inside shot — CleverSense is participating in the company’s BizSpark program, which gives discounted software and other aid to startups — but there are tons of other companies who should be interested in the technology as well.


This iPhone App Knows What You Like — Before You Ask It A Single Question.

WHY WATSON, IBM’S NEW C-3PO IS VERY VERY DANGEROUS

In DANGEROUS DISCIPLE, REFERENCE LIBRARY, Uncategorized, WARNING on July 28, 2011 at 4:53 pm
IBM Watson (Jeopardy at Carnegie Mellon)

GIST OF IT: IBM’s computer, Watson, now has enough language skills to beat humans at a game whose questions involve “subtle meanings, irony, riddles,” and other linguistic complications.

WHY SCYNET CARES:  Read carefully. Watson was built for a mere $3,000,000. And it was tremendously successful at “being human” on the Jeopardy! show. This is happening now, here, in real time – not in some futuristic horror movie. And think about it. How will this artificial humanity be utilized, and for whom will it work? It will be in the service of various corporations, designed to outsmart you in situations where you have to make choices. And the corporations have a different aim than you do. Think about it. Imagine a machine interviewing your son and, by using vast intelligence and all the records from your son’s life, succeeding in getting him, for example, to join the Marines, even though you know he would never have wanted that for himself. How do you protect yourself from superior intelligence aimed at getting you to do things that might not be good for you?

C-3PO is a protocol droid designed to serve humans, and boasts that he is fluent “in over six million forms of communication.”

This year’s FOSE conference had an unexpected theme: artificial intelligence.

The subject came up in Apple co-founder Steve Wozniak’s keynote speech on the first day of the conference. It took center stage, though, on day two, when IBM Research Vice President of Software David McQueeney presented the Beyond Jeopardy! – The Implications of IBM Watson keynote.

McQueeney heads the division responsible for Watson, the $3 million natural language computer IBM made famous on the television quiz show Jeopardy!, where it bested two humans in a two-day trivia contest. At first, McQueeney seemed to be an odd choice of speaker for a gathering of government acquisitions professionals and the people who want to sell them goods and services – until he started to draw connections for the crowd.

The Challenge

IBM, McQueeney said, had left little doubt that computers, with sufficient processing power, could best humans at quantitative tasks – even those that involve some forethought and creativity. The company’s previous artificial intelligence experiment, known as Deep Blue, had defeated world champion Garry Kasparov in a series of chess matches in 1997. But a challenge remained: language.

Computers, in the past, hadn’t been able to understand language the way humans use it. McQueeney explained, “When humans communicate with computers, they have to use a discrete, exact programming language.” But when humans communicate with one another, they are more creative and fluid. “Unstructured sentences are very hard for machines to process,” he said. Human speech involves shades of meaning and intent, even puns and word games, that flummox traditional computers. “It took us much more scientific effort and computational work to tackle human language than to win chess games,” he said.

Jeopardy!, McQueeney said, was the ultimate challenge because the game’s questions involve “subtle meanings, irony, riddles,” and other linguistic complications. To succeed, the system couldn’t simply search through structured information (something that, McQueeney said, Watson did for just 15 percent of the questions asked in the contest). Instead, it had to parse meaning from sentences, much like a human does. “We consider it a long-standing challenge in artificial intelligence to emulate a slice of human behavior,” said McQueeney. Ultimately, the effort succeeded – Watson won the contest – but breakdowns in the company’s solutions were obvious. The computer struggled, McQueeney noted, with the shortest questions.

The Applications

So IBM pulled off quite a parlor trick with what McQueeney estimated was $3 million worth of off-the-shelf parts. But what does the technology mean for government?  McQueeney rattled off a series of possible applications for the system’s natural language abilities. For instance, he noted, Watson’s technology could be used to “support doctors’ differential diagnosis, using data in the form of cited medical papers and giving quick, real-time responses to questions.”  The technology could also be used, he said, to assist technical support and help desk services, improve knowledge management at large companies, and improve information sharing among national security workers. “Computers can now support human interaction in new ways,” he said.

What’s Next

IBM expects to see the first application of the technology in the healthcare field, McQueeney said. The company is currently working with several medical schools to develop knowledge systems that will assist doctors with rapid medical decisions. The company has announced hopes to develop a commercial offering by the end of 2012.

FOSE 2011: IBM’s Artificial Intelligence Coming – Learn More at GovWin.

Florida Driver’s Information Sold

In ARTICLES AND NEWS STORIES, RELATED ARTICLES, Uncategorized, WARNING on July 27, 2011 at 11:35 pm
Topographic map of the State of Florida, USA

Image via Wikipedia

When you go for your driver’s license at the DMV, the last thing you think about is your personal information being sold.

According to a report released by the State of Florida, the Department of Highway Safety and Motor Vehicles made about $62 million.

They made that money by selling the personal information of Florida drivers to companies like Lexis-Nexis and ChoicePoint, which conduct background checks. The information gathered includes your full name, address, and driving history.

We went out to speak with people in the Panama City area to see what they thought about their personal information being sold. Many didn’t even know their personal information was at the mercy of the state.

Christina Van-Dyk, “I didn’t know that and didn’t like the idea because I really don’t want other people knowing my information.”

Robin Hicks, “I don’t think it’s right, I think they should disclose if that’s the situation. You have to fill out forms and sign everything to get your driver’s license and that should be something that should be disclosed at that time.”

Although selling personal information is legal, the law says companies are not allowed to use that information to create new business. State officials insist they do not sell social security numbers to these companies.

via Florida Driver’s Information Sold.

SMART CARD ONLY SMART IF ACADEMIC RESEARCH CAN BE SUPPRESSED

In OUR MISSION, REFERENCE LIBRARY, UNDERSTANDING THE CONCEPTS, WARNING on July 5, 2011 at 7:20 am
Portrait of Ross Anderson, Professor of Security Engineering

Image via Wikipedia

If one reads the article above (“Let’s look at so-called Smart Cards”), the impression is created that all is pretty well with the implementation of the Smart Card, that is, that the Smart Card is indeed smart.
But this does not hold up under closer scrutiny. In fact, the banks made every effort possible to suppress publication of the study by Cambridge scientists, up to attempting to censor pure research. It is their way: to use blunt force trauma in order to suppress any form of deviation from their desired future course of events.
Please read the letter, linked below, written by Ross J. Anderson of Cambridge.
Ross John Anderson, FRS, (born 1956) is a researcher, writer, and industry consultant in security engineering. He is Professor in Security Engineering at the University of Cambridge Computer Laboratory, where he is engaged in the Security Group.
In 1978, Anderson graduated with a BA in mathematics and natural science from Trinity College, Cambridge, and subsequently received a qualification in computer engineering. He worked in the avionics and banking industry before moving in 1992 back to the University of Cambridge, to work on his doctorate under the supervision of Roger Needham and start his career as an academic researcher. He received his PhD in 1995, and became a lecturer in the same year.  He lives near Sandy, Bedfordshire.
In cryptography, he designed with Eli Biham the BEAR, LION and Tiger cryptographic primitives, and coauthored with Biham and Lars Knudsen the block cipher Serpent, one of the finalists in the AES competition. He has also discovered weaknesses in the FISH cipher and designed the stream cipher Pike.
In 1998, Anderson founded the Foundation for Information Policy Research, a think tank and lobbying group on information-technology policy.
Anderson is also a founder of the UK-Crypto mailing list and the economics of security research domain.
He is well-known among Cambridge academics as an outspoken defender of academic freedoms, intellectual property, and other matters of university politics. He is engaged in the Campaign for Cambridge Freedoms and has been an elected member of Cambridge University Council since 2002. In January 2004, the student newspaper Varsity declared Anderson to be Cambridge University’s “most powerful person”.
In 2002, he became an outspoken critic of trusted computing proposals, in particular Microsoft’s Palladium operating system vision.
Anderson’s TCPA FAQ has been characterized by IBM TC researcher David Safford as “full of technical errors” and as “presenting speculation as fact.”
For years Anderson has been arguing that by their nature large databases will never be free of abuse by breaches of security. He has said that if a large system is designed for ease of access it becomes insecure; if made watertight it becomes impossible to use. This is sometimes known as Anderson’s Rule.
Anderson is the author of Security Engineering, published by Wiley in 2001, ISBN 0-471-38922-6.   He was the founder and editor of Computer and Communications Security Reviews.

YOUR OPINION ON POLL 1

In WARNING on July 5, 2011 at 6:23 am