Thursday, December 1, 2011

Faster than the Internet

A Chinese company, BGI, operates 167 DNA sequencers, which altogether produce about 2,000 human genomes’ worth of data every day. Apparently they collect so much data that they outpace the maximum transfer speeds currently available on the internet, so they have to send out disks full of data via FedEx. I actually find it humorous that we have gotten so good at reading DNA sequences that we have to revert to a somewhat archaic means of data exchange rather than just using the internet. According to the article I read, DNA sequencing is accelerating much faster than Moore’s law allows computing to accelerate, which reminded me of the several mentions and discussions of Moore’s law we had throughout class. I can’t quite justify sequencing such a vast amount of DNA if the data is too vast to be thoroughly analyzed. It would make much more sense to me if DNA analysts ramped down their sequencing pace a little bit and spent more time analyzing the data they have already extracted.
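
The article doesn’t spell out the arithmetic, but a rough back-of-the-envelope sketch shows why the couriers win. The per-genome data size, link speed, and shipping time below are my own assumptions for illustration, not BGI’s actual figures:

    # Back-of-envelope comparison: uploading a day's output vs. shipping disks.
    # GENOMES_PER_DAY comes from the article; the other figures are assumptions.
    GENOMES_PER_DAY = 2000
    GB_PER_GENOME = 100      # assumed raw data per sequenced genome, in gigabytes
    LINK_GBPS = 1.0          # assumed internet uplink, in gigabits per second
    SHIPPING_HOURS = 24      # assumed overnight courier time

    daily_data_gb = GENOMES_PER_DAY * GB_PER_GENOME        # 200,000 GB per day
    upload_hours = (daily_data_gb * 8 / LINK_GBPS) / 3600  # gigabytes -> gigabits

    print(f"One day's output: {daily_data_gb:,} GB")
    print(f"Upload time at {LINK_GBPS} Gbps: {upload_hours:,.0f} hours")  # roughly 444 hours
    print(f"Courier time: {SHIPPING_HOURS} hours")

Under those assumptions, pushing a single day’s output through the wire takes more than two weeks, while the box of disks arrives the next morning, and the sequencers keep producing in the meantime, so the backlog only grows.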

Supposedly, within a year or two it’s going to cost under a thousand dollars to have your entire DNA sequence read and examined. This is really cool, but I am skeptical: I wonder what we can actually do with that data at this point in our technological development. Are we already able to diagnose medical issues by finding irregularities in DNA? I would think that something as scientifically and medically significant as the ability to diagnose problems preventatively through DNA sequencing would have at least made the news, but I have heard next to nothing about this. I also wonder whether, when this technology becomes mainstream, people will argue about its ethics. People argue endlessly over the moral correctness of stem cells and stem cell research, and I see nothing stopping them from raising the very same or similar points about DNA sequencing technology.

I look forward to the future of DNA sequencing as a medical aid, and I especially look forward to it becoming cheap enough for everyone to use. Perhaps at some point in the future every newborn child will have their DNA sequenced within the first hours of their life, and perhaps that sort of practice will eliminate infant mortality. Perhaps people will live for 200 years once we can prevent and cure diseases before they become problematic. Pretty much everything seems within the adjacent possible of DNA sequencing and repair.

Thursday, November 24, 2011

Brain Implants

As a continuation of the themes we talked about when we read Neuromancer, I’d like to talk about some recent improvements in the field of brain implants. Prior to a new development in brain implantation technology, implants were limited to about 100 electrodes because a bulky wire had to be attached to each and every one. This technology has been in use since the mid-twentieth century and hadn’t changed much until just recently. The problem with such a small number of electrodes is that they can’t provide a very clear picture of brain activity, which could lead doctors to overdo treatments and perform invasive procedures when they aren’t really necessary.

Scientists at the University of Pennsylvania have recently developed a new type of sensor array that fits 360 sensors onto a much less bulky surface. It’s said to give roughly 50 times better resolution than its predecessor, which could revolutionize the treatment of epilepsy and other disorders caused by abnormal brain activity. Scientists’ biggest need, in terms of treating the brain, is a clearer picture of what’s going on inside it. By studying healthy brains and comparing them to unhealthy ones, they will likely be able to pinpoint problem areas and potentially treat them specifically.

I think this is really cool. I wasn’t even aware that brain implants existed at all. This reminds me a lot of Neuromancer, except that these implants are medical necessities rather than leisure items. It’s very interesting to me that people can have their illnesses “diagnosed” with the help of a little chip placed in their head. It seems like science fiction, even to me. I hope that in the years ahead the scientists working on these new implants will continue to increase the number of sensors they can fit per unit area, and I hope those increases will lead to new developments in medicine and anatomy. I think this technology has a whole lot of possibility even beyond what it already does.

Saturday, November 19, 2011

Can the Internet Predict the Future?

Data is constantly becoming more and more available across the internet as people rely more and more on computers in their day-to-day lives. With these ever-expanding databases available to the general public, companies have begun to pore through them and create “temporal indexes” of trending statements on the internet. These programs work by searching through vast numbers of websites of all types and looking for predictions. One major company in this industry is Recorded Future, which claims to search through 100,000 websites of every type imaginable each hour. Their services generally go for nearly $10,000 per month, but lately they have offered a more individual version of the product for less than $200 a month. That’s something many individuals could afford, which makes me wonder: why would anyone need to predict the future like this?
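
I have no idea how Recorded Future’s software actually works, but I imagine a first pass looks something like the toy sketch below: split text into sentences, then keep the ones that use forward-looking language and mention a year that hasn’t happened yet. Every pattern and example here is my own guess, purely for illustration:

    import re
    from datetime import date

    # Toy "temporal index": keep sentences with forward-looking language and a future year.
    # None of this reflects Recorded Future's real methods; it is only an illustration.
    FUTURE_WORDS = re.compile(r"\b(will|expects?|plans? to|predicts?)\b", re.IGNORECASE)
    YEAR = re.compile(r"\b(20\d{2})\b")

    def extract_predictions(text, today=date(2011, 11, 19)):
        predictions = []
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            years = [int(y) for y in YEAR.findall(sentence)]
            if years and max(years) > today.year and FUTURE_WORDS.search(sentence):
                predictions.append((max(years), sentence.strip()))
        return predictions

    sample = ("Acme Corp expects to restructure by 2013. "
              "The company was founded in 1998. "
              "Analysts predict the stock will recover in 2012.")
    for year, sentence in extract_predictions(sample):
        print(year, "->", sentence)

Even a crude filter like this shows where the hard part lies: deciding which of the matching sentences are actually credible is the real product, and that is exactly the part I’m skeptical about.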

I have a lot of skepticism about this kind of technology. First and foremost, I would like to know how the programs distinguish between factual information and speculation or opinion. I simply do not see how a computer, reading information at a rate of 100,000 websites per hour, can separate the useful information from the junk. Assuming, however, that it can, there are more questions about its methods. How does it treat information that is only up on a website for a short time and is then taken down? I don’t think that should be treated the same way as permanent information, since there was obviously some reason for it to be removed, whether it was false, slanderous, or otherwise. Apparently Recorded Future’s more expensive packages are purchased by hedge funds, which use them to “predict the future” and enhance their trading profits.

Assuming that these technologies are legitimate, and as reliable as they are said to be, they are extremely interesting to me. The hedge fund product predicts stock values five days in advance, which is absolutely remarkable. I would love to see some data on the accuracy of its predictions. Five days is a lifetime in the stock market; someone with that much foresight could easily out-trade competitors who are stuck working with real-time data. It may seem obvious to the experienced investor, but these programs have shown that the news really does affect the stock market. Reportedly, when people expect corporate restructuring, the stock of the company in question tends to drop approximately 2 percent over the following month. I think this is evidence for a concept that was previously only intuitive.

I actually fear the day when everyone has access to this type of technology. If everyone is trading stocks with a five day window into the future, how will that affect the market itself? I can’t see how it could possibly retain stability with millions upon millions of people trading with knowledge of the future. 

Sunday, November 13, 2011

Why Using the Internet is Getting More Dangerous

Scientists are worried about the future of data transmission across the internet. People seem to consider the internet a permanent thing that can never go away, but that is actually incorrect. Supposedly, Amazon and Google have both experienced outages in their respective business and mail clouds (basically huge, shifting conglomerations of data) due to a sort of “internet failure.” I had never considered the internet as something that could break down, but it does make some sense that it has its weaknesses. The internet bears a larger and larger data “burden” by the hour, and it isn’t some untouchable, invincible thing; logically, it is bound to falter at some point or another.

Since almost everyone uses the internet daily for a range of tasks of varying importance, it’s actually kind of scary to consider the possibility of some sort of “internet blackout.” Think about what you do every day that involves the internet: the first things that come to mind are probably trivial, such as social networking. However, some deeper thought will likely turn up more significant uses: checking the news, staying in touch with family members, buying and selling things. Imagine the implications an all-encompassing internet outage would have for the stock market, or for financial transactions in general. Banks wouldn’t be able to access their stored information, which would either dramatically complicate or altogether halt money transfers, and commerce would slow or stop.

In addition to the possibility of “overloading” the internet to the point of failure, there is also the possibility of increased hostility from users of the internet. If data transfer becomes 100 times faster, it follows fairly logically that hacking, identity theft, and other malicious activity over the internet will become 100 times faster too. That means money needs to be spent on beefing up security systems. Just today (Nov. 13th), there has been some sort of scandal going on across Facebook as a result of the actions of the hacking group Anonymous. If the internet were 100 times faster, the obscene material being broadcast by this group would circulate that much more quickly. In this particular case no one is being physically harmed, but it’s quite possible that more immediately harmful things could be done via the internet in the near future. Hopefully I don’t sound like some sort of crazy doomsayer, but these things seem pretty feasible to someone with a basic understanding of how networking works, and I do not want to fall victim to any form of abuse over the internet.

Friday, November 4, 2011

Virtual Reality

Virtual reality, in its most basic definition, is the use of computing technology to simulate real events, places, people, or environments. To date, we haven’t really created virtual reality technology like that in Neuromancer, but I’d say we are on our way there. Tron (the original version, not the recent remake) is considered the first movie to toy with the idea of virtual reality, back in 1982. It’s really interesting that in the less than three decades since we first conceived of virtual reality technology, we have come so far. I’d like to talk about a few of the more advanced forms of VR technology out there right now.

The US Military uses virtual reality to train people. One of the bigger implementations of virtual reality as this kind of mechanism is its use in aircraft pilot training. The Air Force currently uses flight simulation to give their training pilots a “taste” of what flying is like before they really get into the air. This is potentially lifesaving, in that it can let flight instructors know what weaknesses a trainee may have before they actually get behind the joystick and fly a real plane. The problem with this type of training is that it cannot yet approach the realism of actually flying a plane. Consider yourself in a math class, taking a math test. If you know that test is a practice version, chances are high that you won’t put forth as much effort on it as you would if it were a final exam. The test itself might have the same questions, but the stakes of the test are different. This same principle applies in the context of virtual reality as a flight training mechanism.


Perhaps a less immediately significant use for virtual reality technology is in the gaming industry. We’ve discussed the huge potential of VR technology in gaming in class, but I’d like to explore what’s already out there. As most of you probably know, the Xbox 360 and the Wii both have motion-sensing technology (the Wii is built around it entirely, whereas the Xbox has the optional Kinect accessory). One of the biggest limitations of virtual reality technology in gaming is its inability to project images at actual size. Most of us, unfortunately, do not have access to a room made up entirely of screens on all four walls, the floor, and the ceiling. That is what I picture when I think of the future of virtual reality gaming: a room made up of nothing but screens, and a suit for the player to wear whose movements determine how the in-game avatar moves. I’m personally really looking forward to the advent of technology this complex, but I’m unsure when we can expect it. Sooner rather than later, hopefully!

Monday, October 24, 2011

Artificial Intelligence

I was doing some reading about artificial intelligence (AI) in conjunction with my reading of Neuromancer, and I stumbled across some really interesting (though arguably frivolous) creations. The first, designed by a Vietnamese robotics firm called TOSY, is a ping-pong-playing robot. Apparently the robot has undergone two remodels and is now in its third form, TOPIO 3.0. It played against humans at a recent robotics fair in Tokyo and is said to have held its own. This is absolutely amazing. Ping pong is (and I speak from experience) one of those sports that requires an insane amount of coordination, and TOPIO 3.0 was able to return 10 shots in a row against a human. One might call this a waste of money, or a waste of robotics expertise, but if you ask me, it’s astounding.
                                                                            
Another such artificially intelligent creation is the famous Jeopardy-playing computer Watson. As we all know, Watson dominated its competition in every game of Jeopardy it played, beating even the stars Ken Jennings and Brad Rutter by large amounts of money. Again, I find this truly fascinating. Upon doing some research, I found that Watson comprises 90 IBM servers with 16 TB of RAM in total. For those of us who aren’t computer savvy, that’s about 2,000 times the amount found in an average high-end NC State laptop (most 2011-era laptops come with 4 or 8 GB of RAM standard). This is ridiculous. Apparently Watson can process the data of roughly a million books in a second.
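
That “2,000 times” figure is just the ratio of memory sizes, and it checks out as a quick back-of-the-envelope calculation (using the 8 GB laptop case mentioned above):

    # Sanity check on the RAM comparison; the laptop figure is just the 8 GB case above.
    watson_ram_gb = 16 * 1024   # 16 TB expressed in gigabytes
    laptop_ram_gb = 8           # a high-end 2011 laptop
    print(watson_ram_gb / laptop_ram_gb)   # -> 2048.0, i.e. roughly 2,000x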

I think that if you consider how much raw data can be interpreted and used by supercomputers in a short amount of time, it will appear inevitable that at some point computing technology will reach the point where it can interpret things on its own. I have no idea how or when this will happen, but I am interested to see how long it takes for Neuromancer to become even more accurate. I don’t really think that improving artificial intelligence to unheard-of levels is that far off; I think it will happen sooner rather than later. Hopefully our version of the Turing police keeps a strong hold on it, if it turns out to be selfish or malicious. This is one of those things that is really interesting just because no one knows what exactly will happen.

Monday, October 10, 2011

Green Versus Green



As a chemical engineering major, I had previously assumed that members of my prospective profession just create things without having to worry about environmental consequences. Surprisingly, and perhaps disturbingly, that assumption was largely correct until recently. When chemistry first became a large trade, and many products were produced more cheaply and efficiently than ever before, environmental concerns were next to nonexistent. However, starting in the nineties, some scientists began conducting their chemical analyses and experiments with green motivations. Rather than creating things at any cost, whether mercury-tainted water supplies, smog-filled skies, or some other form of pollution, chemists are increasingly going green.

This “green revolution” in the fields of chemistry and chemical engineering makes me wonder what exactly we sacrifice when we concentrate on one green over the other. Some companies are green only in the monetary sense: they operate for pure profit and disregard the environment as much as possible; that is, they meet industry requirements and nothing more. The trend, however, is to be green in the environmental sense of the word: to take responsibility for emissions, effluents, and wastes because they are dangerous to the fragile ecosystems into which they are released.

If we were to focus purely on efficiency and profit and completely ignore environmental concerns, would the extra quality of products warrant this decision? Would the products we produced under this mindset even be better than those manufactured under different conditions? These are the questions that will have to be answered in the near future by those of us who make things, and as far as I can tell, every one of us Franklins will have to deal with this dilemma once we reach the workforce in 4 or 5 years. I personally am looking forward to being on the crest of a wave of green. Where that wave eventually breaks I am eager to find out.