
Wednesday, April 10, 2013

Video game teaches Java programming language to players

They say that one of the most effective ways of teaching someone a skill is to turn it into a game. Well, that’s just what a team at the University of California, San Diego have done with their CodeSpells video game – it teaches its players how to use the Java programming language.

One of the CodeSpells gnomes, that players help using spells written in Java

CodeSpells was developed by a group of graduate students led by computer scientist William Griswold, and is intended for elementary- to high school-aged students. The idea was to develop a method of learning that students could structure in a creative manner, and that they would willingly spend hours pursuing.
Within the universe of the first-person game, the player is a wizard in a land inhabited by gnomes. Because the gnomes have lost their ability to use magic, the player helps them out by casting spells for them. Those spells are written in Java by the player, with some assistance from the game. Along with helping the gnomes, players can also earn merit badges by completing simple quests that once again require the use of the Java spells.

One of the game's Java-written spells

 The game has been tested on a group of 40 girls aged 10 to 12, who had no prior education in programming. The girls reportedly became quite engrossed in the game, and within an hour had gotten the hang of some basic components of Java, which they used to devise new ways of playing the game.

Students trying their hand at CodeSpells


According to the university, “By the time players complete the game’s first level, they have learned the main components of the Java programming language, such as parameters, if statements, for loops and while loops, among other skills.”
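The constructs named in that quote map onto ordinary Java. A hypothetical CodeSpells-style "spell", invented here for illustration rather than taken from the game, might exercise all of them:

```java
// A hypothetical CodeSpells-style "spell": levitate a gnome to a given height.
// Illustrates the Java constructs named above: parameters, if statements,
// for loops and while loops. Not actual game code.
public class LevitateSpell {
    static int castLevitate(int height) {   // "height" is a parameter
        if (height <= 0) {                  // if statement: reject bad input
            return 0;
        }
        int altitude = 0;
        for (int i = 0; i < height; i++) {  // for loop: rise one step at a time
            altitude++;
        }
        while (altitude > height) {         // while loop: settle back down if overshot
            altitude--;
        }
        return altitude;
    }

    public static void main(String[] args) {
        System.out.println(castLevitate(5));   // prints 5
        System.out.println(castLevitate(-3));  // prints 0
    }
}
```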
Griswold and his team plan on conducting some more field tests of the game in schools, but ultimately plan on making it freely available to any educators who wish to use it. Some of the gameplay can be seen in the video below.

Saturday, April 6, 2013

Sensors – Components that make a phone “intelligent”


Sensors play a big role in smartphones, improving their usability to a great extent. Integrated sensors have made these handsets much more than communication devices: with them, a mobile phone can act as a thermostat, a GPS receiver, an air-flow detector and many other instruments. This sensor revolution has transformed the mobile phone into a truly complete gadget.



  
Listed below are some useful sensors used in mobile phones:

·         Proximity sensor
·         Accelerometer sensor
·         Ambient light sensor
·         Magnetometer sensor
·         Image sensor
·         Fingerprint sensor
·         Moisture sensor
·         Tactile sensor
·         Temperature sensor
·         Magnetic sensor

Proximity sensor: A proximity sensor detects the presence of the user’s body and deactivates the phone’s display and touch panel when the handset is brought near the face during a call. If the screen stayed active during a call, it would register unwanted input and could even hang up at the mere touch of your cheek. To avoid such incidents, proximity sensors are incorporated into touchscreen phones. The sensor also helps save energy by switching off the display.
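The behavior described above amounts to a simple rule; a minimal sketch, with an assumed distance threshold and invented method names rather than any real handset API:

```java
// Sketch of the proximity-sensor policy described above: blank the screen
// and ignore touch input when the phone is held against the face during a call.
// The threshold and names are illustrative assumptions, not a real phone API.
public class ProximityPolicy {
    static final double NEAR_CM = 5.0; // assumed "near the face" threshold

    // Returns true if the display (and touch input) should be disabled.
    static boolean displayOff(boolean inCall, double distanceCm) {
        return inCall && distanceCm < NEAR_CM;
    }

    public static void main(String[] args) {
        System.out.println(displayOff(true, 2.0));   // in a call, phone at cheek: true
        System.out.println(displayOff(true, 30.0));  // in a call, phone held away: false
        System.out.println(displayOff(false, 2.0));  // not in a call: false
    }
}
```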

Accelerometer sensor: This sensor measures the acceleration of the device. In a mobile phone, a 3-axis accelerometer senses the orientation of the handset and changes the screen orientation accordingly, letting users switch between portrait and landscape mode smoothly. Watching videos and gaming on mobile devices are now much more engaging thanks to accelerometer sensors. In some phones, this sensor also plays a life-saving role: by sensing the sudden change in acceleration caused by a severe jerk, as in an accident, it can trigger an automatic call for assistance to a pre-assigned number. A 3D accelerometer also enables step recognition for sports applications, and tap-gesture recognition in the user interface for controlling the music player. The Nokia 5500 features a 3D accelerometer.
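The orientation switch described above can be sketched as a comparison of how gravity projects onto the device's axes; the axis convention and names here are assumptions, not any vendor's actual sensor API:

```java
// Sketch of how a 3-axis accelerometer drives screen rotation: whichever
// axis carries most of gravity decides portrait vs landscape.
public class OrientationSensor {
    static String orientation(double ax, double ay) {
        // Held upright, gravity acts mostly along the y axis (portrait);
        // turned on its side, gravity shifts to the x axis (landscape).
        return Math.abs(ay) >= Math.abs(ax) ? "portrait" : "landscape";
    }

    public static void main(String[] args) {
        System.out.println(orientation(0.3, 9.6)); // phone upright -> portrait
        System.out.println(orientation(9.6, 0.3)); // phone on its side -> landscape
    }
}
```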

Ambient light sensor: An ambient light sensor detects the light level in the surroundings and adjusts the display backlight accordingly: the backlight is dimmed when the surroundings are dark and intensified when they are very bright. The sensor keeps the screen comfortable to view while reducing power consumption by optimizing display brightness.

Magnetometer sensor: A magnetometer measures the strength and direction of the magnetic field. Phones such as the HTC HD2, Nokia N97, Nokia E72, Nokia N8, Samsung Wave S8500, Samsung Galaxy S, HTC Hero, and iPhone 4 come with a magnetometer. While navigating, this sensor orients maps automatically towards North, making it easy to find your bearings. It is also known as a digital compass or e-compass.

Image sensor: An image sensor converts an optical image into an electric signal. There are two types: the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) active pixel sensor. CMOS sensors are the type most commonly used in mobile phone cameras. CCD is very good for digital imaging and is mainly used in professional, medical, and scientific applications where high image quality is needed, though some mobile phones also use a CCD sensor: Spice Mobiles, for instance, has launched the S-1200 with a professional-grade CCD sensor.

Fingerprint sensor: A fingerprint sensor is an advanced security feature in mobile phones. It captures a digital image of the fingerprint pattern, which is then processed to create a biometric template used for matching. Motorola has released some Android phones with a fingerprint sensor.

Moisture sensor: A moisture sensor detects the phone’s exposure to liquid. The sensor is embedded in the mobile phone, and whenever it comes into contact with moisture its color changes. Many phones, including iPhones, have one, but don’t be too happy about it: if you drop your mobile in water, the changed sensor color will void the warranty, even if you dry the device completely.

Tactile sensor: Tactile sensor is a device that is sensitive to touch, force, and pressure. Capacitive touchscreen phones use touch switch, one of the kinds of tactile sensors. Touch switches detect the presence of finger or hand as well as stylus.

Temperature sensor: The iPhone incorporates temperature sensors for device safety. When a sensor indicates that the phone has reached a level of heat that threatens its internal electronic components, the device shuts down automatically: a warning symbol appears on screen and the phone switches off until it cools down.

Magnetic sensor: A magnetic sensor detects a nearby magnetic field. In BlackBerry phones, the sensor interacts with a magnet in the case, deactivating the screen and keyboard when the phone is holstered, which prevents unwanted input and saves power.

Intelligent sensors have made the smartphone a complete device. The business-phone and high-end multimedia segments are seeing more and more smart sensors incorporated to make handsets more intelligent and usable. The day is not far off when your mobile phone will satisfy all your technical needs.

Wednesday, April 3, 2013

EWICON bladeless wind turbine generates electricity using charged water droplets

Wind energy may be one of the more sustainable sources of power available, but the spinning blades of conventional wind turbines require regular maintenance and have attracted criticism from bird lovers. That might explain why we've seen wind turbine prototypes that enclose the blades in a chamber or replace them entirely with a disc-like system. But researchers in the Netherlands set out to eliminate the need for a mechanical component entirely and created the EWICON, a bladeless wind turbine with no moving parts that produces electricity using charged water droplets.

Dutch researchers have developed the EWICON, a bladeless windmill with no moving parts that produces electricity by pushing charged water droplets into the wind


Where most wind turbines generate electricity through mechanical energy, the EWICON (short for Electrostatic WInd energy CONvertor) creates potential energy with charged particles – in this case, water droplets. The current design consists of a steel frame holding a series of insulated tubes arranged horizontally. Each tube contains several electrodes and nozzles, which continually release positively-charged water particles into the air. As the particles are blown away, the voltage of the device changes and creates an electric field, which can be transferred to the grid for everyday use.

Energy output would depend not only on the wind speed, but also on the number of droplets, the amount of charge placed on each droplet, and the strength of the electric field.
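A back-of-envelope sketch of that dependence: the droplets leaving per second times the charge on each gives a current, and that current times the potential the wind pushes the droplets through gives power. All numbers below are invented for illustration; none come from the EWICON team:

```java
// Toy model of electrostatic wind-energy conversion: electrical power is
// charge carried away per second (a current) times the potential difference.
public class DropletPower {
    static double powerWatts(double dropletsPerSecond,
                             double chargePerDropletCoulombs,
                             double potentialVolts) {
        double current = dropletsPerSecond * chargePerDropletCoulombs; // amperes
        return current * potentialVolts;                               // watts
    }

    public static void main(String[] args) {
        // 1e9 droplets/s carrying 1e-12 C each, pushed through 10 kV: about 10 W.
        System.out.println(powerWatts(1e9, 1e-12, 1e4));
    }
}
```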
According to the developers, the system could easily be installed on land or at sea, much like regular wind turbines, but the design is particularly suited to urban areas. Expansive wind farms usually aren't feasible in big cities due to a lack of space, but one or more EWICONs could be incorporated into existing architecture just by altering its shape. Also, having no moving parts, it would require less maintenance while producing less noise and no flickering shadows.

The designers of the EWICON windmill incorporated it into the sign on top of the Stadstimmerhuis 010 building in Rotterdam


So far, only a few small-scale prototypes of the EWICON have been produced: two that are incorporated into a sign on top of the Stadstimmerhuis 010 building in Rotterdam and another standalone version that was erected on the Delft Technical University campus. The designers are currently testing the device's capabilities, but are trying to gather funding for a larger model that could produce more power.
The EWICON was designed by architecture firm Mecanoo using technology developed by Delft Technical University researchers Johan Smit and Dhiradj Djairam. The video below demonstrates how the EWICON works.

Tuesday, March 12, 2013

GOCE becomes first satellite to detect an earthquake from space

The European Space Agency’s (ESA) Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) satellite was launched on March 17, 2009, as the first of a series of Earth Explorer satellites. Its mission is to capture high-resolution gravity measurements and produce an accurate gravity map – or geoid – of Earth. To increase the resolution of its measurements, GOCE was put into an unusually low orbit, which has also helped it to become the first satellite to sense sound waves from an earthquake from space.
March 11 marks the two-year anniversary of the 2011 Tohoku earthquake and tsunami that devastated the northeastern coast of Japan, but researchers studying past measurements taken by GOCE have only recently discovered that the satellite detected sound waves from the earthquake.

The five meter-long GOCE satellite has been found to have detected the 2011 Tohoku earthquake (Image: ESA/AOES Medialab)        


While earthquakes create seismic waves that travel through the Earth’s layers, large earthquakes can also cause the Earth’s surface to vibrate, producing sound waves that travel upwards through the atmosphere. While these waves measure only centimeters at the Earth’s surface, they expand to measure kilometers in the thin atmosphere at altitudes of 200-300 km.
The low-frequency sound, or infrasound, that reaches these heights causes vertical movements that expand and contract the atmosphere. It is these movements that GOCE was able to detect thanks to its three pairs of accelerometers, which are so precise they can detect accelerations to within one part in 10,000,000,000,000 of Earth’s gravity.
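To put that precision figure into concrete units: one part in 10^13 of Earth's surface gravity, taking g = 9.81 m/s² (a standard value assumed here, not from ESA's specifications), is on the order of a trillionth of a meter per second squared:

```java
// Converts "one part in 1e13 of Earth's gravity" into an acceleration.
public class GoceResolution {
    static double smallestDetectable(double g, double partsPerUnit) {
        return g / partsPerUnit; // smallest acceleration the sensors can resolve
    }

    public static void main(String[] args) {
        // On the order of 1e-12 m/s^2.
        System.out.println(smallestDetectable(9.81, 1e13));
    }
}
```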
Because GOCE is flying at an altitude of less than 270 km – giving it the lowest altitude of any observation satellite – it passes through the remnants of the atmosphere. To compensate for the drag resulting from changes in the atmosphere and keep GOCE ultra-stable in its low orbit, its ion engine generates carefully calculated thrusts based on measurements provided by the satellite’s accelerometers.

ESA's GOCE satellite detected variations in air density caused by the March 11, 2011 earthquake

Analysis of past thruster and accelerometer data by scientists from the Research Institute in Astrophysics and Planetology in France, the French space agency CNES, the Institute of Earth Physics of Paris and Delft University of Technology in the Netherlands, supported by ESA’s Earth Observation Support to Science Element, revealed that GOCE detected sound waves from the March 11, 2011 earthquake, making it the first satellite to sense an earthquake from space.
Around 30 minutes after the earthquake, GOCE passed through the sound waves, its accelerometers detecting the vertical displacement of the surrounding atmosphere, not unlike the way seismometers detect seismic waves. GOCE also detected variations in air density.
“With this new tool they (seismologists) can start to look up into space to understand what is going on under their feet,” said Raphael Garcia from the Research Institute in Astrophysics and Planetology.
The ripples in the atmosphere caused by the 2011 Tohoku earthquake and their effect on GOCE can be seen in the following animation.
Source: ESA

Wednesday, February 20, 2013

MIT's new image-processing chip improves digital snapshots


Snapshots taken on a smartphone, tablet or point-and-shoot camera could soon be looking a lot better thanks to a new processor chip. Developed by researchers at MIT’s Microsystems Technology Laboratory, the new chip enhances images within milliseconds, and reportedly uses much less power than the image processing software installed on some devices.

Saturday, January 19, 2013

Friday, January 18, 2013

Using air to charge cellphones?

Using air to charge cellphones? IIT-Delhi does it!

All you need to charge your mobile is -- air!

Students at the Department of Industrial Design at the Indian Institute of Technology, Delhi have attached a small turbine to a mobile phone that charges it even while the user is travelling.

"The electricity generated by the turbine when moved by wind energy could charge a cellphone in an emergency. It generates electricity to the tune of 3 to 4 watts which is sufficient to charge a mobile phone."
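The 3 to 4 W figure quoted above allows a rough charge-time estimate. With an assumed typical phone battery of the era (3.7 V, 1,500 mAh, a value not given by the IIT-Delhi team), a full charge would take on the order of an hour and a half:

```java
// Rough charge-time arithmetic for the quoted 3-4 W turbine output.
// The battery capacity is an assumed typical value, not from the source.
public class TurbineChargeTime {
    static double hoursToCharge(double batteryWh, double turbineWatts) {
        return batteryWh / turbineWatts;
    }

    public static void main(String[] args) {
        double batteryWh = 3.7 * 1.5;                      // 3.7 V x 1500 mAh = 5.55 Wh
        System.out.println(hoursToCharge(batteryWh, 3.5)); // midpoint of 3-4 W: roughly 1.6 h
    }
}
```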

The specially designed turbine, which costs about Rs.200 to produce in the laboratory, is so small that it can easily be kept in a pocket.

The primary objective of the device is to extend mobile 'connectivity' where there is no electricity. The device also saves energy, though not to a significant extent.

The device is best suited for coastal areas where the wind flows almost continuously.

The technique is not yet commercialized but the department has sent a proposal to the ministry of science and technology to help manufacture the turbine on a large scale.

"The device will help mobile phone users charge their phones while travelling in a bus, a car or a train. All they need to do is -- place the turbine against the wind flow. It will use wind energy to move the turbine thereby generating energy."

The students have also used a spring in the device that can store energy through a handle. It could be used to charge a mobile during power cuts.

Tuesday, October 16, 2012

Magic Finger turns any surface into a touch interface

A trip on public transport or to the local coffee shop might give the impression that touchscreens are everywhere, but scientists at Autodesk Research, the University of Alberta and the University of Toronto are looking to take the ubiquity of touch interfaces to the next level. They are developing a “Magic Finger” that allows any surface to detect touch input by shifting the touch technology from the surface to the wearer’s finger.

The Magic Finger proof-of-concept prototype        


Magic Finger doesn't look like much at the moment. In fact, with its LED light it looks a bit like a child’s attempt at an ET costume prop. It’s a proof-of-concept prototype made up of a little Velcro ring that straps to the wearer’s fingertip with a trail of wires leading to a box of electronics. But what it does blurs the line between the digital and real world.
Three possible wearing positions for the final Magic Finger
On the ring there is a pair of optical sensors. One is a low-resolution, high-speed sensor for tracking movement; on its own, this would just make the device a mouse strapped to your finger. However, Magic Finger adds a new dimension with a high-resolution camera, which is able to detect 32 different surface textures with 98 percent accuracy. This allows Magic Finger to recognize what it touches, such as a leather bag, a table or a magazine page, and to use this information to turn various surfaces into interfaces for devices or a means of passing information.
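The recognition step described above is essentially nearest-neighbor matching on texture features. A toy sketch with a single invented "roughness" feature and three labels, standing in for the real system's camera-based 32-texture classifier:

```java
// Toy nearest-neighbor texture classifier. The feature and reference values
// are invented for illustration; the real device works on camera images.
public class TextureClassifier {
    static final String[] LABELS = {"leather bag", "table", "magazine page"};
    static final double[] ROUGHNESS = {0.8, 0.3, 0.1}; // invented reference values

    static String classify(double roughness) {
        int best = 0;
        for (int i = 1; i < LABELS.length; i++) {
            if (Math.abs(roughness - ROUGHNESS[i]) < Math.abs(roughness - ROUGHNESS[best])) {
                best = i; // closer reference texture found
            }
        }
        return LABELS[best];
    }

    public static void main(String[] args) {
        System.out.println(classify(0.75)); // prints "leather bag"
        System.out.println(classify(0.12)); // prints "magazine page"
    }
}
```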
Programmed to the wearer’s personal preferences, the Magic Finger can make non-digital objects into digital interfaces. An annoying smartphone ringing can be muted by touching the bag that it’s in. The wearer can start an app or send a saved text message by touching a logo on their shirt. Pinch commands and other gestures can be used for tablet or computer control without actually touching the device, and information can be passed to a receiving device by touching it.
Magic Finger's sensors
Another twist on the Magic Finger concept is its ability to detect “artificial” textures. That is, material textures or patterns that act like QR codes. With this, the wearer can download information from a magazine page by touching it or disposable stickers could be made into remote controls for devices. As a particularly geeky touch, the fingertip camera can even be used as an impromptu periscope.

Magic Finger's high-resolution camera with image examples at various magnifications        


The Autodesk team hopes one day to make the Magic Finger into a self-contained device with appropriate miniaturization. Once reduced in size, the device might be attached to a ring or embedded under a fingernail or in the fingertip itself so that it is invisible and dormant until required.
The video below is a presentation of Magic Finger by Autodesk Research.
Source: Autodesk Research via Dvice

Friday, October 5, 2012

"Which programming language should I learn first?"

It depends:

- To program in an expressive and powerful language: Python
- To get a website up quickly: PHP
- To mingle with programmers who call themselves “rockstars”: Ruby.
- To really learn to program: C.
- To achieve enlightenment: Scheme.
- To feel depressed: SQL
- To drop a chromosome: Microsoft Visual Basic
- To get a guaranteed, mediocre, but well paying job writing financial applications in a cubicle under fluorescent lights: Java.
- To do the same thing with certifications and letters after your name: C#
- To achieve a magical sense of childlike wonder that you have a hard time differentiating from megalomania: Objective C

Tuesday, October 2, 2012

Infrared technology offers faster wireless data transfer than Wi-Fi and Bluetooth


Back around the turn of the century, infrared ports for wireless data transfer over short distances were commonplace on many mobile devices. But it wasn't long before infrared communication technology was kicked to the curb in favor of the more versatile radio-based Wi-Fi and Bluetooth technologies. Fraunhofer researchers are looking to resurrect infrared wireless data transfer technology with the development of a “multi-gigabit communication module” that can wirelessly transfer data 46 times faster than Wi-Fi and 1,430 times faster than Bluetooth.


The infrared wireless communication module developed at Fraunhofer can transfer data wirelessly at speeds of 1 Gbps (Photo: Fraunhofer IPMS/Jürgen Lösel)         


The new infrared module developed by Frank Deicke, a researcher at the Fraunhofer Institute for Photonic Microsystems (IPMS) in Dresden, boasts a data transfer rate of 1 gigabit per second (Gbps), making it not only significantly faster than conventional Wi-Fi and Bluetooth wireless technologies, but also six times faster than a wired USB 2.0 connection.
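Unpacking those multiples gives the effective throughputs the comparison implies. Note these are effective rates, well below the technologies' nominal peak speeds (USB 2.0 signals at 480 Mbps, for instance), which is presumably how the multiples were computed:

```java
// Effective rates implied by the 46x (Wi-Fi), 1,430x (Bluetooth) and
// 6x (USB 2.0) comparisons against the 1 Gbps infrared link.
public class ImpliedRates {
    static double impliedMbps(double linkMbps, double speedupFactor) {
        return linkMbps / speedupFactor;
    }

    public static void main(String[] args) {
        double irMbps = 1000.0;                        // the 1 Gbps infrared link
        System.out.println(impliedMbps(irMbps, 46));   // implied Wi-Fi rate, ~21.7 Mbps
        System.out.println(impliedMbps(irMbps, 1430)); // implied Bluetooth rate, ~0.7 Mbps
        System.out.println(impliedMbps(irMbps, 6));    // implied USB 2.0 rate, ~167 Mbps
    }
}
```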
The small infrared module developed by Deicke and his colleagues specifically for the wireless transfer of large amounts of video between devices consists of hardware and software components. The hardware includes a transceiver about the size of a child’s fingernail that contains a laser diode to send infrared light pulses and a photo detector to receive them. This optical component is able to send and receive light signals simultaneously.
Because the light signals become weakened and distorted when traveling through the air, the researchers programmed error-correction mechanisms into the module, along with high-speed signal processing to overcome the bottleneck in the encoding of the data before transmission and subsequent decoding at the receiving device. This helps reduce the encoding/decoding load placed on the microprocessors, which helps keep energy consumption down.
As an optical technology, the module still requires a clear line of sight between the communicating devices, but Deicke says this isn't a problem as it was designed for transferring data between two nearby devices, such as a camera or smartphone and a PC or laptop.
Deicke and his team admit that the technology needs to be accepted as standard by manufacturers before it can catch on, which is why he is an active member of the Infrared Data Association (IrDA) and contributes to the “10 Giga-IR working group,” the name of which provides a hint at the planned next step for the technology.
“Our current infrared module has already demonstrated that infrared technology is able to go far beyond established standards,” he says. “We plan to improve performance even more in the future.”
Having already achieved data transfer rates of 3 Gbps with his current model, he hopes that 10 Gbps speeds are not too far away.
Source: Fraunhofer

Wednesday, September 26, 2012

World’s most complex mathematical theory

A Japanese mathematician may have finally cracked ‘the ABC Conjecture’ — one of the world’s most complex mathematical theories.


Shinichi Mochizuki, a scholar at Kyoto University, has released four papers on the Internet describing his proof of what is known as ‘ABC Conjecture’

Experts said he took four years to calculate the theory and if confirmed it would be one of the greatest mathematical achievements of this century.

The ABC Conjecture was first proposed by British mathematician David Masser, working with France’s Joseph Oesterle, in 1985. It was, however, never proven.

It refers to equations of the form a+b=c, and involves the concept of a square-free number, meaning a number that cannot be divided by the square of any number.
For example, 15 and 17 are square-free numbers, but 16 is not, because it is divisible by 2² (that is, by 4).

From that, the ABC Conjecture concerns a property of the product of the three integers a, b and c. The conjecture states that for integers a+b=c, the square-free part always has a minimum value greater than zero and nearly always greater than 1.
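The square-free notion can be made concrete in code. Mathematicians usually state the conjecture in terms of the "radical" rad(abc), the product of the distinct prime factors of a·b·c; a short sketch of both ideas:

```java
// Square-freeness and the radical (product of distinct prime factors),
// the quantities the ABC Conjecture is usually stated with.
public class SquareFree {
    static boolean isSquareFree(int n) {
        for (int p = 2; p * p <= n; p++) {
            if (n % (p * p) == 0) return false; // divisible by a square
        }
        return true;
    }

    static int radical(int n) {
        int rad = 1;
        for (int p = 2; p <= n; p++) {
            if (n % p == 0) {
                rad *= p;                      // count each prime factor once
                while (n % p == 0) n /= p;     // strip all copies of p
            }
        }
        return rad;
    }

    public static void main(String[] args) {
        System.out.println(isSquareFree(15)); // true: 15 = 3 * 5
        System.out.println(isSquareFree(16)); // false: 16 is divisible by 4 = 2^2
        System.out.println(radical(16));      // prints 2
    }
}
```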

The proof, which runs to some 500 pages, can be viewed on his website in a series of PDFs labelled "Inter-universal Teichmuller Theory".

Dorian Goldfeld, a mathematician at Columbia University in New York, told Nature magazine that Mochizuki's discovery is "one of the most astounding achievements of mathematics in the 21st century."

Friday, September 21, 2012

World’s most efficient thermoelectric material developed


Approximately 90 percent of the world’s electricity is generated by heat energy. Unfortunately, electricity generation systems operate at around 30 to 40 percent efficiency, meaning around two thirds of the energy input is lost as waste heat. Despite this, the inefficiency of current thermoelectric materials that can convert waste heat to electricity has meant their commercial use has been limited. Now researchers have developed a thermoelectric material they claim is the best in the world at converting waste heat into electricity, potentially providing a practical way to capture some of the energy that is currently lost.



Thermoelectrics can be used to convert energy currently lost as heat wasted from industry and vehicle tailpipes into electricity 


The new material, which is based on the common semiconductor lead telluride, is environmentally stable and is expected to convert from 15 to 20 percent of waste heat to electricity. The research team, made up of chemists, material scientists and mechanical engineers from Northwestern University and Michigan State University, says the material exhibits a thermoelectric figure of merit (or “ZT”) of 2.2, which it claims is the highest reported to date.
The higher a material’s ZT, the more efficient it is at converting heat to electricity. While there’s no theoretical upper limit to ZT, no known materials exhibit a ZT higher than 3. The researchers believe with a ZT of 2.2, the new material is efficient enough to be used in practical applications and could usher in more widespread adoption of thermoelectrics by industry.
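For context, the figure of merit is conventionally defined as ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity and T the absolute temperature. A quick sketch with generic textbook-scale values, not the Northwestern material's own parameters:

```java
// Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
// The sample numbers below are generic illustrative values.
public class FigureOfMerit {
    static double zt(double seebeckVperK, double sigmaSperM,
                     double kappaWperMK, double tempK) {
        return seebeckVperK * seebeckVperK * sigmaSperM * tempK / kappaWperMK;
    }

    public static void main(String[] args) {
        // S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 300 K:
        // a ZT around 0.8, typical of ordinary room-temperature thermoelectrics.
        System.out.println(zt(200e-6, 1e5, 1.5, 300));
    }
}
```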
"Our system is the top-performing thermoelectric system at any temperature," said Mercouri G. Kanatzidis, who led the research. "The material can convert heat to electricity at the highest possible efficiency. At this level, there are realistic prospects for recovering high-temperature waste heat and turning it into useful energy."
With the huge potential for thermoelectrics to recover some of the heat energy that is currently lost, they have been the focus of much research that has seen them improve significantly in recent years. So much so that the Mars rover Curiosity features lead telluride thermoelectrics, although its system only has a ZT of 1. BMW is also testing systems to harvest the heat from the exhaust systems and combustion engines of its cars.
Aside from capturing some of the wasted heat energy emitted through a vehicle’s tailpipe, the new material could be used in heavy manufacturing industries, including glass and brick making, refineries, and coal- and gas-fired power plants, and on large ships and tankers, where large combustion engines operate continuously. Such applications are seen as ideal because waste heat temperatures in these settings range from 400 to 600 degrees Celsius (750 to 1,100 degrees Fahrenheit), the sweet spot for thermoelectrics.
The team’s paper describing the development of the new material is published in the journal Nature.
Source: Northwestern University

Thursday, August 23, 2012

Oak Ridge develops improved way of extracting uranium from seawater


The world’s estimated reserves of uranium stand at only 6 million tons, and with the growing demand for reliable energy free of greenhouse emissions leading to more and more nuclear plants being built, that supply may not last very long. Some estimates place the time before all the uranium is gone at between 50 and 200 years. However, the oceans of the world contain 4.5 billion tons of uranium dissolved in seawater. That’s enough to last something on the order of 6,500 years. The tricky bit is getting it out, but a team at Oak Ridge National Laboratory, Tennessee, has come a step closer to economically extracting uranium from seawater with a new material that is much more efficient than previous methods.
Ever since it was learned how much precious metal is dissolved in seawater, scientists, engineers, visionaries and con men have dreamed of ways to extract it. In the 1920s, popular science editor Hugo Gernsback graced the covers of his magazines with fanciful floating factories hauling giant sheets of gold out of the briny deep. Since the 1960s, almost a dozen nations have studied ways of making the dream a reality. The Japanese have come closest, with the Japan Atomic Energy Research Institute managing to extract uranium using mats of woven polymer fibers in 2002, but at a cost three times the market price of the metal at the time. That is the basic problem: you can get the metal out, but it costs more than it’s worth.

A disc of highly enriched uranium from the Y-12 National Security Complex Plant


Now a team at Oak Ridge is working to bring down those costs by devising a more efficient method of extraction. The Oak Ridge team’s approach is based on its examination of how plastic and chemical groups bind together. From this, the researchers determined that it was possible to enhance the uranium-extracting capacity of the uranium-loving amidoxime chemical groups in their high-capacity reusable adsorbent, which they combined with a Florida company's high-surface-area polyethylene fibers. These small-diameter fibers offer high surface area and come in a variety of shapes, and tailoring their size and shape increases their adsorption capacity. The fibers are irradiated and then reacted with chemicals that have a high affinity for particular metals. The result is a little uranium sponge.
Using the material, called HiCap, is simply a matter of immersing it in seawater. As it sits in the water, the material grabs on to the uranium ions and deposits them on the surface of its fibers. Once a sufficient amount of uranium is adsorbed, the material is removed and the metal extracted with acid. "We have shown that our adsorbents can extract five to seven times more uranium at uptake rates seven times faster than the world's best adsorbents," said Chris Janke, one of the inventors and a member of Oak Ridge’s Materials Science and Technology Division. HiCap is also reusable: after the extraction process, it can be regenerated with potassium hydroxide.
The results of the Oak Ridge team were verified by researchers at Pacific Northwest National Laboratory’s Marine Sciences Laboratory in Sequim, Washington, and were presented at last Wednesday’s meeting of the American Chemical Society in Philadelphia. The material is a long way from making uranium as common as pig iron, but it does demonstrate that extracting it from the oceans may no longer be a con man’s dream.

Wednesday, July 18, 2012

Sapphire disks could communicate with future generations 10 million years from now


Storing data for longer than a few years is tricky enough with rapidly advancing technology, so what are you supposed to do if you need to store data for thousands or even millions of years? That's just the problem facing nuclear waste management companies, who need a way to warn future civilizations of hazardous sites that will withstand the test of time. Luckily a recent proposal may have the solution with a sapphire disk etched in platinum that could survive longer than humanity itself.

A sapphire disk etched in platinum could preserve information for future generations to decipher 10 million years from now


Communicating with future generations is tricky for any number of reasons. Aside from finding a medium that will remain intact over the years, there's no way to accurately predict how people of the future will read data, what language they'll use, what technology they'll have at their disposal, or if they'll even be human by then. But when the information being preserved has to do with radioactive waste that can still be dangerous thousands of years from now, it's even more crucial to leave a warning that can be easily interpreted centuries from now.
That's why ANDRA, a French nuclear waste management agency, created a disk made of industrial sapphire that could last millions of years, thanks to the gem's exceptional toughness and resistance to scratching. The prototype was made by taking two thin, 20-cm (7.9-in) wide disks, etching one side of one disk in platinum, and then molecularly fusing them together. One disk made this way costs €25,000 (about US$30,738) and could hold up to 40,000 miniature pages of text or images. Theoretically, the sapphire should preserve the etchings so they can be viewed by any future archaeologists using a microscope. Currently the prototype has been submerged in acid to test its resilience, and ANDRA is confident the results will show the disk could survive for 10 million years.
The sapphire disk is one of many ideas currently being explored by ANDRA, which started a project to bring together materials scientists, archaeologists, anthropologists and other specialists to explore ways of warning future excavators about hazardous waste sites. Fortunately, the team has plenty of time to settle on a definitive solution, since the nuclear waste repositories operating now will most likely not be sealed until the 22nd century. Even with time on their side, the group still hopes to identify all possible solutions to the problem and then narrow them down by 2014 or 2015.
ANDRA's project and sapphire disk may be aimed at managing radioactive waste, but whatever idea it settles on could open the floodgates for preserving even more vital information for the far-off future.
Source: Science

Monday, July 2, 2012

Oxygen microcapsules could save lives when patients can't breathe


Six years ago, Dr. John Khier of Boston Children’s Hospital began investigating the idea of using injectable oxygen on patients whose lungs were incapacitated or whose airways were blocked. He was prompted to do so after a young girl in his care passed away: she succumbed to a brain injury that resulted when severe pneumonia caused her lungs to stop working properly, which in turn caused her blood oxygen levels to drop too low. Now, Khier is reporting that his team has injected gas-filled microparticles into the bloodstreams of oxygen-deprived lab animals, successfully raising their blood oxygen back to normal levels within seconds.

A syringe containing the oxygen microparticle solution



The microparticles are created using a device called a sonicator, which uses high-frequency sound waves to mix lipids (fatty molecules) and oxygen gas together. This results in the mixture forming into particles about two to four micrometers in diameter, each of which consists of an oxygen core surrounded by a lipid outer shell. Because the particles are so small and flexible, they are able to squeeze through capillaries – by contrast, if straight oxygen gas were injected, bubbles of it could block the blood flow and cause embolisms.
The microparticles are combined with a liquid carrier so they can be injected into the bloodstream. That suspension contains three to four times the amount of oxygen as regular red blood cells, so relatively small amounts of it are needed, depending on how much of an oxygen boost the patient requires.
When the microparticle solution was tested on lab animals with blocked tracheas, it was able to keep them alive for up to 15 minutes without their taking a single breath, and it also reduced low-oxygen-related cardiac arrests and organ injuries.
It is intended that the treatment would be used mainly in emergency response scenarios, to hold non-breathing patients over for 15 to 30 minutes – the carrier liquid would overload the bloodstream if used for longer. Khier and his team envision paramedics, emergency clinicians or intensive care personnel keeping supplies of the microparticle solution close at hand and ready to go, should it be needed.
“This is a short-term oxygen substitute—a way to safely inject oxygen gas to support patients during a critical few minutes,” he said. “Eventually, this could be stored in syringes on every code cart in a hospital, ambulance or transport helicopter to help stabilize patients who are having difficulty breathing.”
Although already-available blood substitutes are capable of carrying oxygen, they still first need to be oxygenated by functioning lungs.
A paper on the research was published this Wednesday in the journal Science Translational Medicine.

Sunday, July 1, 2012

10 Scientific Laws and Theories You Really Should Know


Whether we're launching a space shuttle or trying to discover another Earth-like planet, we rely on scientific laws and theories to guide us.

 Scientists have many tools available to them when attempting to describe how nature and the universe at large work. Often they reach for laws and theories first. What's the difference? A scientific law can often be reduced to a mathematical statement, such as E = mc²; it's a specific statement based on empirical data, and its truth is generally confined to a certain set of conditions. For example, in the case of E = mc², c refers to the speed of light in a vacuum.
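To see how such a law reduces to a calculation, here's a minimal sketch evaluating E = mc² for one kilogram of mass (the values used are standard reference figures, not from the article):

```python
# Rest energy of 1 kg of mass via E = m * c^2
c = 299_792_458.0        # speed of light in a vacuum, m/s
m = 1.0                  # mass in kilograms

E = m * c ** 2           # energy in joules
print(f"E = {E:.3e} J")  # roughly 9.0e16 joules
```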

A scientific theory often seeks to synthesize a body of evidence or observations of particular phenomena. It's generally -- though by no means always -- a grander, testable statement about how nature operates. You can't necessarily reduce a scientific theory to a pithy statement or equation, but it does represent something fundamental about how nature works.

Both laws and theories depend on basic elements of the scientific method, such as generating a hypothesis, testing that premise, finding (or not finding) empirical evidence and coming up with conclusions. Eventually, other scientists must be able to replicate the results if the experiment is destined to become the basis for a widely accepted law or theory.

In this article, we'll look at 10 scientific laws and theories that you might want to brush up on, even if you don't find yourself, say, operating a scanning electron microscope all that frequently. We'll start off with a bang and move on to the basic laws of the universe, before hitting evolution. Finally, we'll tackle some headier material, delving into the realm of quantum physics.

10: Big Bang Theory

If you're going to know one scientific theory, make it the one that explains how the universe arrived at its present state. Based on research performed by Edwin Hubble, Georges Lemaitre and Albert Einstein, among others, the big bang theory postulates that the universe began almost 14 billion years ago with a massive expansion event. At the time, the universe was confined to a single point, encompassing all of the universe's matter. That original movement continues today, as the universe keeps expanding outward.


The big bang theory

The theory of the big bang gained widespread support in the scientific community after Arno Penzias and Robert Wilson discovered cosmic microwave background radiation in 1965. Using radio telescopes, the two astronomers detected cosmic noise, or static, that didn't dissipate over time. Collaborating with Princeton researcher Robert Dicke, the pair confirmed Dicke's hypothesis that the original big bang left behind low-level radiation detectable throughout the universe.

9: Hubble's Law of Cosmic Expansion

Let's stick with Edwin Hubble for a second. While the 1920s roared past and the Great Depression limped by, Hubble was performing groundbreaking astronomical research. Hubble not only proved that there were other galaxies besides the Milky Way, he also discovered that these galaxies were zipping away from our own, a motion he called recession.


Hubble and his famous law helped to quantify the movement of the universe's galaxies.


In order to quantify the velocity of this galactic movement, Hubble proposed Hubble's Law of Cosmic Expansion, aka Hubble's law, an equation that states: velocity = H0 × distance. Velocity represents the galaxy's recessional velocity; H0 is the Hubble constant, a parameter that indicates the rate at which the universe is expanding; and distance is the galaxy's distance from the one with which it's being compared.

Hubble's constant has been calculated at different values over time, but the current accepted value is 70 kilometers/second per megaparsec, the latter being a unit of distance in intergalactic space. For our purposes, that's not so important. What matters most is that Hubble's law provides a concise method for measuring a galaxy's velocity in relation to our own. And perhaps most significantly, the law established that the universe is made up of many galaxies, whose movements trace back to the big bang.
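Plugging the quoted value of the constant into the law gives a quick back-of-the-envelope calculator; a minimal sketch (the example distance is hypothetical):

```python
# Hubble's law: velocity = H0 * distance
H0 = 70.0  # Hubble constant, km/s per megaparsec, as quoted above

def recession_velocity(distance_mpc):
    """Recessional velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

# A galaxy 100 megaparsecs away recedes at about 7,000 km/s.
print(recession_velocity(100))  # 7000.0
```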

8: Kepler's Laws of Planetary Motion

For centuries, scientists battled with one another and with religious leaders about the planets' orbits, especially about whether they orbited our sun. In the 16th century, Copernicus put forth his controversial concept of a heliocentric solar system, in which the planets revolved around the sun -- not the Earth. But it would take Johannes Kepler, building on work performed by Tycho Brahe and others, to establish a clear scientific foundation for the planets' movements.


Kepler's law of areas

Kepler's three laws of planetary motion -- formed in the early 17th century -- describe how planets orbit the sun. The first law, sometimes called the law of orbits, states that planets orbit the sun elliptically. The second law, the law of areas, states that a line connecting a planet to the sun covers an equal area over equal periods of time. In other words, if you're measuring the area created by drawing a line from the Earth to the sun and tracking the Earth's movement over 30 days, the area will be the same no matter where the Earth is in its orbit when measurements begin.

The third one, the law of periods, allows us to establish a clear relationship between a planet's orbital period and its distance from the sun. Thanks to this law, we know that a planet relatively close to the sun, like Venus, has a far briefer orbital period than a distant planet, such as Neptune.
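The law of periods can be sketched numerically. With the period measured in Earth years and the distance (semi-major axis) in astronomical units, the proportionality constant works out to 1; the orbital distances below are rough textbook values, not from the article:

```python
# Kepler's third law: T^2 is proportional to a^3.
# In units of Earth years and astronomical units, T = a ** 1.5.
def orbital_period_years(a_au):
    """Orbital period in years for a semi-major axis in AU."""
    return a_au ** 1.5

print(round(orbital_period_years(0.723), 2))  # Venus: 0.61 years
print(round(orbital_period_years(30.07), 1))  # Neptune: 164.9 years
```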

7: Universal Law of Gravitation

We may take it for granted now, but more than 300 years ago Sir Isaac Newton proposed a revolutionary idea: that any two objects, no matter their mass, exert gravitational force toward one another. This law is represented by an equation that many high schoolers encounter in physics class. It goes as follows:

F = G × [(m1m2)/r²]

Thanks to Newton's universal law, we can figure out the gravitational force between any two objects.

F is the gravitational force between the two objects, measured in newtons. m1 and m2 are the masses of the two objects, while r is the distance between them. G is the gravitational constant, a number currently calculated to be 6.672 × 10⁻¹¹ N m² kg⁻².

The benefit of the universal law of gravitation is that it allows us to calculate the gravitational pull between any two objects. This ability is especially useful when scientists are, say, planning to put a satellite in orbit or charting the course of the moon.
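As a worked example of the equation above, here's a minimal sketch computing the Earth-Moon attraction (the masses and distance are rough textbook values, not from the article):

```python
# Newton's universal law of gravitation: F = G * m1 * m2 / r^2
G = 6.674e-11            # gravitational constant, N m^2 kg^-2

def gravitational_force(m1, m2, r):
    """Force in newtons between masses m1, m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r ** 2

earth = 5.972e24         # mass of the Earth, kg
moon = 7.348e22          # mass of the Moon, kg
distance = 3.844e8       # mean Earth-Moon distance, m
print(f"{gravitational_force(earth, moon, distance):.3e} N")  # ~2e20 N
```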

6: Newton's Laws of Motion

As long as we're talking about one of the greatest scientists who ever lived, let's move on to Newton's other famous laws. His three laws of motion form an essential component of modern physics. And like many scientific laws, they're rather elegant in their simplicity.

The first of the three laws states that an object in motion stays in motion unless acted upon by an outside force. For a ball rolling across the floor, that outside force could be the friction between the ball and the floor, or it could be the toddler that kicks the ball in another direction.

The second law establishes a connection between an object's mass (m) and its acceleration (a), in the form of the equation F = m × a. F represents force, measured in newtons. It's also a vector, meaning it has a directional component. Owing to its acceleration, that ball rolling across the floor has a particular vector, a direction in which it's traveling, and that direction is accounted for in calculating its force.
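The second law and its vector character can be sketched in a few lines; the 0.5 kg ball and its acceleration components are hypothetical values chosen for illustration:

```python
# Newton's second law, F = m * a, applied component-wise to a 2-D
# acceleration vector so the force keeps its direction.
def force(mass, acceleration):
    """Force vector (N) for a mass (kg) and acceleration vector (m/s^2)."""
    return tuple(mass * a for a in acceleration)

# A 0.5 kg ball accelerating at 4 m/s^2 along x and 3 m/s^2 along y:
print(force(0.5, (4.0, 3.0)))  # (2.0, 1.5)
```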

Newton's second law of motion

The third law is rather pithy and should be familiar to you: For every action there is an equal and opposite reaction. That is, for every force applied to an object or surface, that object pushes back with equal force.

5: Laws of Thermodynamics

The British physicist and novelist C.P. Snow once said that a nonscientist who didn't know the second law of thermodynamics was like a scientist who had never read Shakespeare. Snow's now-famous statement was meant to emphasize both the importance of thermodynamics and the necessity for nonscientists to learn about it.

Thermodynamics is the study of how energy works in a system, whether it's an engine or the Earth's core. It can be reduced to several basic laws, which Snow cleverly summed up as follows:
  • You can't win.
  • You can't break even.
  • You can't quit the game.
Let's unpack these a bit. By saying you can't win, Snow meant that since matter and energy are conserved, you can't get one without giving up some of the other (i.e., E=mc²). It also means that for an engine to produce work, you have to supply heat, although in anything other than a perfectly closed system, some heat is inevitably lost to the outside world, which then leads to the second law.


The laws of thermodynamics in action

The second statement -- you can't break even -- means that due to ever-increasing entropy, you can't return to the same energy state. Energy concentrated in one place will always flow to places of lower concentration.

Finally, the third law -- you can't quit the game -- refers to absolute zero, the lowest theoretical temperature possible, measured at zero kelvin (minus 273.15 degrees Celsius, or minus 459.67 degrees Fahrenheit). When a system reaches absolute zero, molecules stop all movement, meaning that there is no kinetic energy and entropy reaches its lowest possible value. But in the real world, even in the recesses of space, reaching absolute zero is impossible -- you can only get very close to it.

4: Archimedes' Buoyancy Principle

After he discovered his principle of buoyancy, the ancient Greek scholar Archimedes allegedly yelled out "Eureka!" and ran naked through the city of Syracuse. The discovery was that important. The story goes that Archimedes made his great breakthrough when he noticed the water rise as he got into the tub.

  
Buoyancy keeps everything from rubber ducks to ocean liners afloat.

According to Archimedes' buoyancy principle, the force acting on, or buoying, a submerged or partially submerged object equals the weight of the liquid that the object displaces. This sort of principle has an immense range of applications and is essential to calculations of density, as well as designing submarines and other oceangoing vessels.
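The principle translates directly into the formula F = ρ × g × V, the weight of the displaced fluid; a minimal sketch, using standard values for g and the density of fresh water:

```python
# Archimedes' principle: buoyant force equals the weight of displaced fluid.
g = 9.81                 # gravitational acceleration, m/s^2

def buoyant_force(fluid_density, displaced_volume):
    """Buoyant force in newtons: fluid density (kg/m^3) * g * volume (m^3)."""
    return fluid_density * g * displaced_volume

# An object displacing 0.002 m^3 (2 liters) of fresh water (1000 kg/m^3):
print(buoyant_force(1000.0, 0.002))  # about 19.62 N of upward force
```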

3: Evolution and Natural Selection

Now that we've established some of the fundamental concepts of how our universe began and how physics plays out in our daily lives, let's turn our attention to the human form and how we got to be the way we are. According to most scientists, all life on Earth has a common ancestor. But in order to produce the immense amount of difference among all living organisms, certain ones had to evolve into distinct species.


A hypothetical (and simplified) example of how natural selection might play out amongst frogs

In a basic sense, this differentiation occurred through evolution, through descent with modification. Populations of organisms developed different traits, through mechanisms such as mutation. Those with traits that were more beneficial to survival, such as a frog whose brown coloring allows it to be camouflaged in a swamp, were naturally selected for survival; hence the term natural selection.

It's possible to expand upon both of these theories at greater length, but this is the basic, and groundbreaking, discovery that Darwin made in the 19th century: that evolution through natural selection accounts for the tremendous diversity of life on Earth.

2: Theory of General Relativity

Albert Einstein's theory of general relativity remains an important and essential discovery because it permanently altered how we look at the universe. Einstein's major breakthrough was to say that space and time are not absolutes and that gravity is not simply a force applied to an object or mass. Rather, the gravity associated with any mass curves the very space and time (often called space-time) around it.

To conceptualize this, imagine you're traveling across the Earth in a straight line, heading east. After a while, if someone were to pinpoint your position on a map, you'd actually be both east and far south of your original position. That's because the Earth is curved. To travel directly east, you'd have to take into account the shape of the Earth and angle yourself slightly north. (Think about the difference between a flat paper map and a spherical globe.)


Einstein's theory of general relativity changed our understanding of the universe.

Space is pretty much the same. For example, to the occupants of the shuttle orbiting the Earth, it can look like they're traveling on a straight line through space. In reality, the space-time around them is being curved by the Earth's gravity (as it would be with any large object with immense gravity such as a planet or a black hole), causing them to both move forward and to appear to orbit the Earth.

Einstein's theory had tremendous implications for the future of astrophysics and cosmology. It explained a minor, unexpected anomaly in Mercury's orbit, showed how starlight bends and laid the theoretical foundations for black holes.

 1: Heisenberg's Uncertainty Principle

Einstein's broader theory of relativity told us more about how the universe works and helped to lay the foundation for quantum physics, but it also introduced more confusion into theoretical science. By 1927, this sense that the universe's laws were in some contexts flexible had led to a groundbreaking discovery by the German scientist Werner Heisenberg.


Is it a particle, a wave or both?

In postulating his Uncertainty Principle, Heisenberg realized that it was impossible to simultaneously know, with a high level of precision, certain pairs of a particle's properties. In other words, you can know the position of an electron with a high degree of certainty, but not its momentum, and vice versa.
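The principle has a quantitative form that the article doesn't state: Δx · Δp ≥ ħ/2, where ħ is the reduced Planck constant. A minimal sketch of what that bound implies for an electron confined to an atom-sized region (the confinement length is a hypothetical example):

```python
# Heisenberg's uncertainty principle: delta_x * delta_p >= hbar / 2
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def min_momentum_uncertainty(delta_x):
    """Smallest possible momentum uncertainty (kg*m/s) for a given
    position uncertainty delta_x (m)."""
    return hbar / (2 * delta_x)

# Pinning an electron's position down to 1e-10 m (roughly an atom's width)
# forces a momentum uncertainty of about 5.3e-25 kg*m/s:
print(f"{min_momentum_uncertainty(1e-10):.3e} kg*m/s")
```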

Niels Bohr later made a discovery that helps to explain Heisenberg's principle. Bohr found that an electron has the qualities of both a particle and a wave, a concept known as wave-particle duality, which has become a cornerstone of quantum physics. So when we measure an electron's position, we are treating it as a particle at a specific point in space, with an uncertain wavelength. When we measure its momentum, we are treating it as a wave, meaning we can know the amplitude of its wavelength but not its location.