Channel: Physics

This Amazing Video Shows The Largest Structure In The Universe


A spooky new image shows a web of bright galaxies aligned in the largest structures ever discovered in the universe.

Scientists working with a telescope in Chile discovered the alignment by studying 93 quasars — objects that shine very brightly and are powered by supermassive black holes — from the early universe. The picture (an artist's impression created using data collected by the European Southern Observatory's Very Large Telescope) shows the quasars aligned in a web of blue against the black sea of space.

Earlier studies have found that these quasars are "known to form huge groupings spread over billions of light-years," European Southern Observatory (ESO) representatives said in a statement. The quasars studied by the research team formed when the universe was about 4.6 billion years old, about one-third of the age it is now, ESO added. [Biggest Structure in the Universe Explained (Infographic)]

"The first odd thing we noticed was that some of the quasars' rotation axes were aligned with each other — despite the fact that these quasars are separated by billions of light-years," study leader Damien Hutsemékers, from the University of Liège in Belgium, said in the same ESO statement.

 


Hutsemékers and his team also found that the quasars' rotation axes were linked to what is called the large-scale structure of the universe. Previous studies have shown that galaxies are not distributed evenly throughout the universe. Instead, the large star-filled objects clump together in a web, and this is the large-scale structure of the universe, according to ESO.

Scientists working with the Very Large Telescope found that the rotation of the quasars is parallel to the large-scale structures where the galaxies are found.

"The alignments in the new data, on scales even bigger than current predictions from simulations, may be a hint that there is a missing ingredient in our current models of the cosmos," team member Dominique Sluse, of the Argelander-Institut für Astronomie in Bonn, Germany, and the University of Liège, said.


Team members said that the likelihood these results were created by chance is less than 1 percent, according to ESO. The new study is detailed in the Nov. 19 issue of the journal Astronomy & Astrophysics.

The European Southern Observatory is a collaboration of 15 countries, including France, Brazil and Denmark. ESO operates three observing sites in Chile: La Silla, Paranal and Chajnantor. The Very Large Telescope is located at Paranal.

Follow Miriam Kramer @mirikramer. Follow us @Spacedotcom, Facebook and Google+. Original article on Space.com.

Copyright 2014 SPACE.com, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

READ MORE: We Asked A NASA Astronaut What His Scariest Moment Was

CHECK OUT:  These Stunning Hubble Images Show Us The Secrets Of The Universe

Join the conversation about this story »


Two New Subatomic Particles Discovered


Since its first runs of proton collisions between 2008 and 2013, the Large Hadron Collider has provided a stream of experimental data that scientists rely on to test predictions from particle and high-energy physics. In fact, today CERN made public the first data produced by LHC experiments. And with each passing day, new information is released that is helping to shed light on some of the deeper mysteries of the universe.

This week, for example, CERN announced the discovery of two new subatomic particles that are part of the baryon family. The particles, known as the Xi_b'- and Xi_b*-, were discovered thanks to the efforts of the LHCb experiment — an international collaboration involving roughly 750 scientists from around the world.

The existence of these particles was predicted by the quark model, but they had never been observed before. What's more, their discovery could help scientists further confirm the Standard Model of particle physics, which is considered virtually unassailable thanks to the discovery of the Higgs boson.

Like the well-known protons that the LHC accelerates, the new particles are baryons made from three quarks bound together by the strong force. The types of quarks are different, though: the new Xi_b particles both contain one beauty (b), one strange (s), and one down (d) quark. Thanks to the heavyweight b quarks, they are more than six times as massive as the proton.

However, their mass also depends on how they are configured. Each of the quarks has an attribute called "spin"; in the Xi_b'- state, the spins of the two lighter quarks point in the opposite direction to the b quark, whereas in the Xi_b*- state they are aligned. This difference makes the Xi_b*- a little heavier.
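As a rough sanity check on those figures, the approximate masses (rounded values based on the LHCb measurement; treat them as illustrative, not authoritative) bear out both claims:

```python
# Approximate masses in MeV/c^2 (rounded, illustrative values).
PROTON_MASS = 938.3
XI_B_PRIME = 5935.0   # Xi_b'- : lighter spin configuration
XI_B_STAR = 5955.3    # Xi_b*- : spins aligned, slightly heavier

# Both new baryons come in at more than six proton masses...
print(XI_B_PRIME / PROTON_MASS)  # ~6.33
print(XI_B_STAR / PROTON_MASS)   # ~6.35

# ...and the aligned-spin Xi_b*- carries roughly 20 MeV/c^2 extra.
print(XI_B_STAR - XI_B_PRIME)    # ~20.3
```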

"Nature was kind and gave us two particles for the price of one," said Matthew Charles of the CNRS's LPNHE laboratory at Paris VI University. "The Xi_b'- is very close in mass to the sum of its decay products: if it had been just a little lighter, we wouldn't have seen it at all using the decay signature that we were looking for."

"This is a very exciting result," said Steven Blusk from Syracuse University in New York. "Thanks to LHCb's excellent hadron identification, which is unique among the LHC experiments, we were able to separate a very clean and strong signal from the background. It demonstrates once again the sensitivity and how precise the LHCb detector is."

Blusk and Charles jointly analyzed the data that led to this discovery. The existence of the two new baryons had been predicted in 2009 by Canadian particle physicists Randy Lewis of York University and Richard Woloshyn of the TRIUMF, Canada's national particle physics lab in Vancouver.

As well as the masses of these particles, the research team studied their relative production rates, their widths — a measure of how unstable they are — and other details of their decays. The results match up with predictions based on the theory of Quantum Chromodynamics (QCD).

QCD is part of the Standard Model of particle physics, the theory that describes the fundamental particles of matter, how they interact, and the forces between them. Testing QCD at high precision is a key to refining our understanding of quark dynamics, models of which are tremendously difficult to calculate.

"If we want to find new physics beyond the Standard Model, we need first to have a sharp picture," said LHCb's physics coordinator Patrick Koppenburg from Nikhef Institute in Amsterdam. "Such high precision studies will help us to differentiate between Standard Model effects and anything new or unexpected in the future."

The measurements were made with the data taken at the LHC during 2011-2012. The LHC is currently being prepared — after its first long shutdown — to operate at higher energies and with more intense beams. It is scheduled to restart by spring 2015.

The research was published online yesterday on the physics preprint server arXiv and has been submitted to the scientific journal Physical Review Letters.

Further Reading: CERN, LHCb

SEE ALSO: 3 Researchers Just Won $3 Million For Their Game-Changing Physics Finding


The Best Strategy For Surviving A Lightning Strike Goes Against Everything You've Ever Thought


If you get caught in the middle of a big open field during a lightning storm, which of the following uniforms will be most likely to keep you safe — a thick wetsuit, a Superman costume, a medieval coat of armor or your birthday suit?

The answer will surprise you.

Created by Henry Reich

Production team: Alex Reich, Henry Reich, Emily Elert, Kate Yoshida, Ever Salazar, Peter Reich

Music by Nathaniel Schroeder: soundcloud.com/drschroeder

Follow MinuteEarth: On YouTube


Earth Is Surrounded By An 'Invisible Shield' 7,200 Miles Away That Protects Our Satellites


Some 7,200 miles above Earth, an invisible shield cloaking our planet is helping to protect us from damaging, super-fast "killer" electrons, scientists have found. This Star Trek-style shield stops these whizzing electrons in their tracks, preventing them from harming astronauts and frying our satellites.

As described in the journal Nature, this invisible barrier is located within the Van Allen radiation belts. These are two doughnut-shaped rings around our planet that extend up to 40,000 kilometers above Earth. The inner zone is full of high-energy protons, whereas the outer zone is dominated by high-energy electrons.

The protective shield was discovered after scientists from the University of Colorado Boulder analyzed almost two years of data gathered by the twin NASA spacecraft, the Van Allen Probes, which orbit the rings to observe the behavior of high-energy electrons in this area.

The data revealed a sharp boundary at the very inner edge of the outer belt that appeared to be deflecting incoming highly energetic electrons, known as ultrarelativistic electrons. These particles whizz around Earth at roughly 160,000 kilometers per second, more than half the speed of light. It was assumed that these electrons would make a smooth transition, gradually drifting into the upper atmosphere before being destroyed by collisions with air molecules. However, much to the researchers' surprise, a sharp cutoff was observed instead.

"It's almost like these electrons are running into a glass wall in space," lead author Professor Daniel Baker said in a news release. "Somewhat like the shields created by force fields on Star Trek that were used to repel alien weapons, we are seeing an invisible shield blocking these electrons. It's an extremely puzzling phenomenon."

To attempt to identify this enigmatic force field, the researchers examined several different scenarios that could generate and maintain a barrier of this kind. They considered that the Earth's magnetic field lines could be somehow trapping the electrons in place, or that radio signals from human devices on Earth could be somehow dispersing the electrons. But with what they were seeing, neither of these explanations made sense.

Instead, they suggest that a cloud of cold, electrically charged gas known as the plasmasphere could be playing a role. This giant cloud starts just 600 miles above Earth but stretches thousands of miles into the outer, high-energy-electron-dominated zone of the Van Allen belts. They propose that low-frequency electromagnetic waves within the cloud, which produce a phenomenon known as "plasmaspheric hiss," could be scattering the electrons at the boundary.

However, the team doesn't think that the story ends there, and expects to find more pieces to the puzzle in the future. "I think the key here is to keep observing the region in exquisite detail," said Baker, "which we can do because of the powerful instruments on the Van Allen Probes."

[Via University of Colorado Boulder, Nature, Tech Times and LA Times]

Read this next: Why Vultures Don't Get Food Poisoning

READ MORE:  This Cheetah Robot Is 'A Ferrari In The Robotics World'

SEE ALSO: Mind-Blowing Images Of Earth From Space


Incredible Movies Show Light In Slow-Motion


There's a new, ultra-fast camera that lets engineers take videos of phenomena such as the movement of laser light, and even of apparent faster-than-light effects. Watching the videos is like seeing a physics textbook come to life.

Here's a video of a laser reflecting off a mirror:

Here's laser light moving through air, then resin. Notice how the light refracts and changes its path slightly when it enters the resin:

The new camera is called a compressed ultra-fast photography, or CUP, camera. It uses compressed sensing, a technique that allows the camera to reconstruct images from less collected data. The camera works at up to 100 billion frames per second, compared with the millions of frames per second of previous ultra-fast cameras.
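To put that frame rate in perspective, here is a quick back-of-the-envelope calculation (mine, not from the paper) of how far light travels between consecutive frames:

```python
C = 299_792_458   # speed of light in vacuum, m/s
FPS = 100e9       # CUP frame rate: 100 billion frames per second

frame_interval = 1 / FPS               # 10 picoseconds between frames
travel_mm = C * frame_interval * 1e3   # light's travel per frame, in mm
print(travel_mm)  # ~3.0 mm: a laser pulse advances about 3 mm per frame
```

At that scale, a pulse crossing a tabletop spans thousands of frames, which is why the videos show light seeming to crawl.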

This isn't the fastest ultra-fast camera ever invented: in 2011, researchers built a one-trillion-frames-per-second camera, but that one only works on repeating phenomena, because it needs to take several pictures of similar events and then piece together what it sees. The CUP camera works on fleeting, non-repeating events.

CUP cameras could help engineers see new things in optical communications and quantum mechanics. For example, they could visualize how light bends around real-life invisibility devices and watch light oscillating in super-thin spaces, Dartmouth College engineer Brian Pogue wrote in an essay in the journal Nature. Nature also published a paper today describing how the CUP technique works.

This article originally appeared on Popular Science


SEE ALSO: 23 Geek-Worthy Science Gifts

CHECK OUT: One Of The Most Beautiful Places In America Is Being Destroyed, And The Government Is In Denial


This Physicist Has A Groundbreaking Idea About Why Life Exists


Why does life exist?

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck.

But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. 

Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated.
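The counting argument can be made concrete with a standard statistical-mechanics toy model (my illustration, not from the article): distribute indistinguishable energy quanta among two small collections of oscillators and tally the arrangements.

```python
from math import comb

def multiplicity(n_oscillators, n_quanta):
    """Ways to distribute identical energy quanta among oscillators
    (stars-and-bars counting): C(q + N - 1, q)."""
    return comb(n_quanta + n_oscillators - 1, n_quanta)

# Two tiny "solids" of 10 oscillators each, sharing 20 energy quanta.
concentrated = multiplicity(10, 20) * multiplicity(10, 0)  # all energy in one
spread_out = multiplicity(10, 10) * multiplicity(10, 10)   # energy split evenly

print(concentrated)  # 10015005 arrangements
print(spread_out)    # 8533694884 arrangements, roughly 850x more
```

Even for a system this small, spread-out configurations vastly outnumber concentrated ones; for macroscopic numbers of particles the imbalance becomes astronomical.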

Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example.

As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

This situation changed in the late 1990s, due primarily to the work of Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.
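The character of the Jarzynski-Crooks result can be sketched numerically: the forward-to-reverse probability ratio grows exponentially with the entropy produced (expressed here in units of Boltzmann's constant; a schematic illustration, not the actual derivation).

```python
import math

def irreversibility_ratio(entropy_produced_in_kB):
    """P(forward) / P(reverse) for a process producing the given
    entropy, in units of k_B: the ratio is exp(delta_S / k_B)."""
    return math.exp(entropy_produced_in_kB)

# Modest entropy production already makes reversal hopelessly unlikely:
for ds in (1, 10, 50):
    print(ds, irreversibility_ratio(ds))

# A macroscopic event like a cooling cup of coffee produces entropy of
# order 10^23 k_B, so spontaneous reversal is effectively impossible.
```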

Using Jarzynski and Crooks’ formulation, he derived a generalization of the second law of thermodynamics that holds for systems of particles with certain characteristics: The systems are strongly driven by an external energy source such as an electromagnetic wave, and they can dump heat into a surrounding bath. This class of systems includes all living things. England then determined how such systems tend to evolve over time as they increase their irreversibility. “We can show very simply from the formula that the more likely evolutionary outcomes are going to be the ones that absorbed and dissipated more energy from the environment’s external drives on the way to getting there,” he said. The finding makes intuitive sense: Particles tend to dissipate more energy when they resonate with a driving force, or move in the direction it is pushing them, and they are more likely to move in that direction than any other at any given moment.

“This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments,” England explained.

Self-replication (or reproduction, in biological terms), the process that drives the evolution of life on Earth, is one such mechanism by which a system might dissipate an increasing amount of energy over time.

As England put it, “A great way of dissipating more is to make more copies of yourself.”

In a September paper in the Journal of Chemical Physics, he reported the theoretical minimum amount of dissipation that can occur during the self-replication of RNA molecules and bacterial cells, and showed that it is very close to the actual amounts these systems dissipate when replicating.

He also showed that RNA, the nucleic acid that many scientists believe served as the precursor to DNA-based life, is a particularly cheap building material. Once RNA arose, he argues, its “Darwinian takeover” was perhaps not surprising.

The chemistry of the primordial soup, random mutations, geography, catastrophic events and countless other factors have contributed to the fine details of Earth’s diverse flora and fauna. But according to England’s theory, the underlying principle driving the whole process is dissipation-driven adaptation of matter.

This principle would apply to inanimate matter as well. “It is very tempting to speculate about what phenomena in nature we can now fit under this big tent of dissipation-driven adaptive organization,” England said. “Many examples could just be right under our nose, but because we haven’t been looking for them we haven’t noticed them.”

Scientists have already observed self-replication in nonliving systems. According to new research led by Philip Marcus of the University of California, Berkeley, and reported in Physical Review Letters in August, vortices in turbulent fluids spontaneously replicate themselves by drawing energy from shear in the surrounding fluid. And in a paper in Proceedings of the National Academy of Sciences, Michael Brenner, a professor of applied mathematics and physics at Harvard, and his collaborators present theoretical models and simulations of microstructures that self-replicate. These clusters of specially coated microspheres dissipate energy by roping nearby spheres into forming identical clusters. “This connects very much to what Jeremy is saying,” Brenner said.

Besides self-replication, greater structural organization is another means by which strongly driven systems ramp up their ability to dissipate energy. A plant, for example, is much better at capturing and routing solar energy through itself than an unstructured heap of carbon atoms. Thus, England argues that under certain conditions, matter will spontaneously self-organize. This tendency could account for the internal order of living things and of many inanimate structures as well. “Snowflakes, sand dunes and turbulent vortices all have in common that they are strikingly patterned structures that emerge in many-particle systems driven by some dissipative process,” he said. Condensation, wind and viscous drag are the relevant processes in these particular cases.

“He is making me think that the distinction between living and nonliving matter is not sharp,” said Carl Franck, a biological physicist at Cornell University, in an email. “I’m particularly impressed by this notion when one considers systems as small as chemical circuits involving a few biomolecules.”

England’s bold idea will likely face close scrutiny in the coming years.

He is currently running computer simulations to test his theory that systems of particles adapt their structures to become better at dissipating energy. The next step will be to run experiments on living systems.

Prentiss, who runs an experimental biophysics lab at Harvard, says England’s theory could be tested by comparing cells with different mutations and looking for a correlation between the amount of energy the cells dissipate and their replication rates.

“One has to be careful because any mutation might do many things,” she said. “But if one kept doing many of these experiments on different systems and if [dissipation and replication success] are indeed correlated, that would suggest this is the correct organizing principle.”

Brenner said he hopes to connect England’s theory to his own microsphere constructions and determine whether the theory correctly predicts which self-replication and self-assembly processes can occur — “a fundamental question in science,” he said.

Having an overarching principle of life and evolution would give researchers a broader perspective on the emergence of structure and function in living things, many of the researchers said. “Natural selection doesn’t explain certain characteristics,” said Ard Louis, a biophysicist at Oxford University, in an email. These characteristics include a heritable change to gene expression called methylation, increases in complexity in the absence of natural selection, and certain molecular changes Louis has recently studied.

If England’s approach stands up to more testing, it could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that “the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve,” Louis said.

“People often get stuck in thinking about individual problems,” Prentiss said.  Whether or not England’s ideas turn out to be exactly right, she said, “thinking more broadly is where many scientific breakthroughs are made.”

Emily Singer contributed reporting.


An Amazing New Injectable Gel Could Save Lives On The Battlefield


Blood, the body's vital, life-sustaining transport medium, has a strong incentive to flow as easily as possible — except when somebody's badly injured.

With a cut or larger bleeding injury, platelets rush to the opening, part of the body’s desperate attempts to cut off what had moments ago flowed freely.

That switch takes time, and as good as platelets are, in trauma situations or on the battlefield they might not work fast enough unaided.

Now, a team of scientists at Texas A&M, Harvard, and MIT just developed an injectable gel that uses synthetic nanoplatelets to stanch the bleeding.

Described in the journal ACS Nano, the gel carries within it specially made platelets between 20 and 30 nanometers in diameter but only about 1 nanometer thick.

This thin shape makes them virtually two-dimensional. The edges of these synthetic nanoplatelets are positively charged, while their tops and bottoms are negatively charged.

The edges are attracted to the oppositely charged tops and bottoms of other nanoplatelets, forming a layer of nanoscale scales — or the world's tiniest shield phalanx — when injected at the source of bleeding.

Another challenge in making a coagulating aid is ensuring that it seals the wound and nothing else, and that the body can flush it out when it's no longer needed. This biodegradable gel puts synthetic platelets at the wound, lets them build a structure while working with the body’s own healing factors, and then remains safely in place.

The hydrogel and platelet mixture reduced clotting time by up to 77 percent when compared with the blood’s natural clotting abilities. In animal testing, all subjects treated with the gel survived a 28-day period without hemorrhaging from their old wounds.

The gel is also fully biodegradable, which means it can be absorbed by the body once it has served its purpose, eliminating any need for surgery to remove it.

Biomedical engineer Akhilesh Gaharwar, lead author on the study, says he envisions soldiers carrying this material into battle in pre-loaded syringes, allowing for life-saving coagulation after suffering a wound when help is hard to find. 

“Maybe in the next five to seven years, we can expect to see syringes full of this hydrogel that can be included in a first-aid box,” Gaharwar tells Popular Science. “Apart from out-of-hospital, emergency situations, we expect that this technology can be used in operation theater."

Gaharwar isn’t alone in hoping for that outcome. The US Army Research Office supported this study, and it’s easy to see why they would want a fast-acting battlefield coagulant.

"What excites me about this hydrogel, I know we tried foams that were light and washed away, but hydrogel is very dense," John Steinbaugh, a former Army Special Operations medic, tells Popular Science. "And what I understand about the technology is the gel is dense enough inside a wound cavity that the pressure of the blood’s not going to wash the gel away. It’ll allow it to get down in the wound, conform to the wound cavity. It looks very promising."

Since his time in the service, Steinbaugh has helped develop the XStat, a special syringe full of sponges designed to stop battlefield wounds from bleeding.

Steinbaugh’s chief reservation about the gel, however, is how well it’ll work in cold environments like battlefields in the mountains of Afghanistan. "So if I’m dressing a wound and it’s 15 degrees outside, is that going to affect the gel? Is the gel going to be frozen in a block where it can’t be injected in the body? If it can’t be used below zero, that’s a downside," he says.

But just because the obstacles exist doesn’t mean they’re insurmountable. Steinbaugh concluded: “I think those are all things those guys are already thinking about, and they’re going to solve.”

Joe Landolina also has some experience with making gels to treat wounds. His company, Suneris, makes VetiGel, another gel designed to go on wounds and stop bleeding. Instead of nanoplatelets, VetiGel creates a mesh on the site. Here’s how Landolina describes it:

What we have is a polymer made up of very small plant-derived analogues of the extracellular matrix. What that means is that the gel itself, when it goes on the tissue, it reassembles into a mechanical barrier, reassembles into a mesh. That mesh triggers a body to produce fiber very rapidly. In less than 10 seconds you can stop anything from arterial bleeding to a traumatic laceration of the liver, to even smaller day-to-day procedures.

What does the future hold for this hydrogel? "The next step in developing this hydrogel is to make it 'bioactive,'" says Gaharwar.

In current form, the hydrogel can stop the bleeding, but cannot initiate the healing process. We are currently designing strategies to develop bio-responsive hydrogel. We expect that after the hydrogel is able to stop the bleeding, it should initiate the healing of the damaged tissues. The first step in the healing process is invasion of the vascular network. We are incorporating signaling molecules within hydrogel to attract blood vessels (vascularization). The vascularization of the damaged tissues will significantly aid in the tissue's regeneration and healing process.

"I would say from this paper it’s very early stage," Landolina said, when asked about this new hydrogel. "What we know in the hemostatic industry is that what works on a rat doesn’t always work when you scale up. That being said, what we’re working on is a gel, so we firmly believe that a gel or a technology that can conform to the wound, whether that’s a gel or some other modality, is the ideal type of hemostatic agent."

So in the future, when combat medics scramble to stop a battle wound, they won't just be looking for a way to salvage the situation. They'll be reaching for a salve.

This article originally appeared on Popular Science


SEE ALSO: The Navy is developing next-generation drones that can land on aircraft carriers.

Join the conversation about this story »

Researchers Have Teleported Light 15 Miles — The Furthest Yet



A new distance record has been set in the strange world of quantum teleportation.

In a recent experiment, the quantum state (the direction it was spinning) of a light particle instantly traveled 15.5 miles (25 kilometers) across an optical fiber, making this the farthest successful quantum teleportation feat yet.

Advances in quantum teleportation could lead to better Internet and communication security, and get scientists closer to developing quantum computers.

About five years ago, researchers could only teleport quantum information, such as which direction a particle is spinning, across a few meters. Now, they can beam that information across several miles. [Twisted Physics: 7 Mind-Blowing Findings]

Quantum teleportation doesn't mean it's possible for a person to instantly pop from New York to London, or be instantly beamed aboard a spacecraft like in television's "Star Trek." Physicists can't instantly transport matter, but they can instantly transport information through quantum teleportation. This works thanks to a bizarre quantum mechanics property called entanglement.

Quantum entanglement happens when two subatomic particles stay connected no matter how far apart they are. When one particle is disturbed, it instantly affects the entangled partner. It's impossible to tell the state of either particle until one is directly measured, but measuring one particle instantly determines the state of its partner.
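The perfect correlation described above can be mimicked with a toy sketch. This is an illustration, not a simulation of real quantum mechanics: it simply samples the two possible outcomes of the Bell state (|00⟩ + |11⟩)/√2, in which both photons always agree when measured in the same basis even though each photon on its own looks random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of the Bell state (|00> + |11>)/sqrt(2): measuring both
# photons in the same basis yields "00" or "11", each with probability 1/2.
outcomes = rng.choice(["00", "11"], size=10_000, p=[0.5, 0.5])

first = [o[0] for o in outcomes]
second = [o[1] for o in outcomes]

# Either photon alone looks like a fair coin flip...
print(sum(bit == "0" for bit in first) / len(first))   # ~0.5
# ...yet the pair always agrees: measuring one fixes the other.
print(all(a == b for a, b in zip(first, second)))      # True
```

The point of the sketch is the asymmetry it reproduces: no single measurement reveals anything until made, but the results of paired measurements are locked together.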

In the new, record-breaking experiment, researchers from the University of Geneva, NASA's Jet Propulsion Laboratory and the National Institute of Standards and Technology used a superfast laser to pump out photons. Every once in a while, two photons would become entangled. Once the researchers had an entangled pair, they sent one down the optical fiber and stored the other in a crystal at the end of the cable. Then, the researchers shot a third particle of light at the photon traveling down the cable. When the two collided, they obliterated each other.

Though both photons vanished, the quantum information from the collision appeared in the crystal that held the second entangled photon.

Going the distance

Quantum information has already been transferred dozens of miles, but this is the farthest it's been transported using an optical fiber, and then recorded and stored at the other end.

Other quantum teleportation experiments that beamed photons farther used lasers instead of optical fibers to send the information.

But unlike the laser method, the optical-fiber method could eventually be used to develop technology like quantum computers that are capable of extremely fast computing, or quantum cryptography that could make secure communication possible.

Physicists think quantum teleportation will lead to secure wireless communication — something that is extremely difficult but important in an increasingly digital world. Advances in quantum teleportation could also help make online banking more secure.

The research was published Sept. 21 in the journal Nature Photonics.

Follow Kelly Dickerson on Twitter. Follow us @livescience, Facebook & Google+. Original article on Live Science.

Copyright 2014 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

CHECK OUT: 23 Geek-Worthy Science Gifts

SEE ALSO: Here's What Would Happen If We Nuked The Moon



These Two Gigantic Stars Are About To Merge Into One Supermassive Star



In the Giraffe constellation 13,000 light-years away, MY Camelopardalis is a massive binary system made up of two blue (that is, very hot and very bright) stars. They're so close, they're about to merge into a supermassive star — a process no one has ever seen before.

Even though MY Cam is the first known example of a supermassive merger progenitor, astronomers studying the system say that most massive stars are created through mergers with smaller ones. The findings were published in Astronomy & Astrophysics last week.

Stars that move alone, like our sun, are in the minority. Most stars in our galaxy were formed in binary or multiple systems, where they're tied by gravity to a companion star. In some of these systems, the stars appear to eclipse one another when their orbital plane is seen edge-on from Earth. For that reason, MY Cam was thought to be a single star up until a decade ago.

Using observations from the Calar Alto Observatory in Spain, a team led by Javier Lorenzo from the University of Alicante found that the eclipsing binary MY Cam is made up of one star that's 38 times the mass of our sun, and another that's 32 solar masses. The two jumbo stars are very close together: Their orbital period is just under 1.2 days, making it the shortest orbital period known for these types of stars. In order to complete a full turn so quickly, the stars must be in extremely close contact (pictured above) — so close that they're actually touching and their outer layers are mixing together in what's known as a common envelope.

The members of this contact binary are moving around each other at a speed of over one million kilometers an hour. Additionally, the tidal forces between them make each star rotate on its axis in just over a day, almost like Earth, except that each has a radius roughly 700 times bigger. The sun, by comparison, makes a full turn once every 26 days.
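The quoted speeds can be checked with Kepler's third law. A rough sketch, assuming circular orbits and using the masses and period reported for MY Cam:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg

m1, m2 = 38 * M_SUN, 32 * M_SUN   # the two stars of MY Cam
P = 1.2 * 86400                   # orbital period, seconds

# Kepler's third law gives the separation between the two stars:
# a^3 = G (m1 + m2) P^2 / (4 pi^2)
a = (G * (m1 + m2) * P**2 / (4 * math.pi**2)) ** (1 / 3)

# Each star circles the common centre of mass once per period.
v1 = 2 * math.pi * a * (m2 / (m1 + m2)) / P   # m/s
v2 = 2 * math.pi * a * (m1 / (m1 + m2)) / P

print(a / 1e9)             # separation: ~13.6 million km
print(v1 * 3.6, v2 * 3.6)  # speeds in km/h: ~1.4 and ~1.6 million
```

Both stars come out moving at well over a million kilometers an hour, consistent with the article's figure.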

Not only is MY Cam the most massive eclipsing binary, it's also the most massive binary with components so young they haven't even begun to evolve, according to a news release. The stars are less than two million years old, National Geographic explains, and they were probably formed as we see them today. The researchers expect the two will merge into a single object of more than 60 solar masses before either of them has had time to evolve significantly.

MY Cam sits at the end of the hind-legs of the Giraffe, and if you’re in the northern hemisphere, you could probably see it using just binoculars pointed between Ursa Major and Cassiopeia.

NOW READ: NASA Is Going To Let The Hubble Space Telescope Burn Up

CHECK OUT: Scientists Have Figured Out What Color The Universe Is


Physics And Chemistry Explain All The Different Shapes Of Snowflakes


In the Northern Hemisphere at least, the idealised vision of Christmas involves snow.

Whilst no one snowflake is exactly the same as another, at least on a molecular level, scientists have nonetheless devised a system of classification for the many types of crystals that snow can form. This graphic shows the shapes and names of some of the groups of this classification (click on it for a larger version).


You might wonder what the shapes of snowflakes have to do with chemistry. Actually, the study of crystal structures of solids has its own discipline, crystallography, which allows us to determine the arrangement of atoms in these solids.

Crystallography works by passing X-rays through a sample; the atoms within diffract the rays as they pass through. Analysis of the diffraction pattern allows the structure of the solid to be discerned; this technique was used by Rosalind Franklin to photograph the double helix arrangement of DNA prior to Watson & Crick's confirmation of its structure.
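The geometry underpinning this analysis is Bragg's law, n·λ = 2d·sin θ, which relates the diffraction angle to the spacing between planes of atoms. A minimal sketch, using a common laboratory X-ray wavelength and a hypothetical diffraction angle chosen purely for illustration:

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta).
# Solving for d gives the spacing between atomic planes.
wavelength = 1.54e-10        # metres: Cu K-alpha X-rays, a common lab source
theta = math.radians(22.5)   # hypothetical first-order diffraction angle
n = 1                        # first-order reflection

d = n * wavelength / (2 * math.sin(theta))
print(d * 1e10)  # plane spacing in angstroms: ~2.0
```

Measuring many such angles from many orientations is what lets crystallographers reconstruct a full three-dimensional arrangement of atoms.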

Back to snow crystals: the shapes they form are very dependent on temperature and humidity. This diagram illustrates this fact: simpler shapes are more common at low humidities, whilst more complex varieties of crystal are formed at high humidities. We still don't know the precise variables behind the formation of particular shapes, although researchers are continually working on theoretical equations to predict snowflake shapes.

The number of categories snow crystals can be categorised into has been steadily increasing over the years. In early studies in the 1930s, they were classified into 21 different shape-based categories; in the 1950s, this was expanded into 42 categories, in the 1960s to 80 categories, and most recently in 2013 to a staggering 121 categories.

This latest study splits the classification into three sub-levels: general, intermediate, and elementary. The graphic featured here shows the 39 intermediate categories, which themselves can be grouped into 8 general categorisations. Each of the intermediate categories has specific characteristics, which are detailed at length in the research paper this graphic is based on.

The eight general categories shown in the graphic are:

  • Column crystals.
  • Plane crystals.
  • Combination of column & plane crystals.
  • Aggregation of snow crystals.
  • Rimed snow crystals.
  • Germs of ice crystals.
  • Irregular snow particles.
  • Other solid precipitation.

There's a lot more out there on snowflake structure than described here; if you want to read in much more detail, check out some of the links below. If you'd rather just see some amazing macro images of snowflakes, then check out the photos of Russian photographer Alexey Kljatov.

CHECK OUT: 23 Geek-Worthy Science Gifts

SEE ALSO: Here's The Crazy Physics You Need To Know To Understand 'Interstellar'


How To Predict Dangerous Solar Flares



A couple of months ago, the sun sported the largest sunspot we've seen in the last 24 years.

This monstrous spot, visible to the naked eye (that is, without magnification, but with protective eyewear of course), launched more than 100 flares.

The number of the spots on the sun ebbs and flows cyclically, every 11 years. Right now, the sun is in the most active part of this cycle: we're expecting lots of spots and lots of flares in the coming months.

Usually, the media focuses on the destructive power of solar flares: the chance that, one day, a huge explosion on the sun will fling a ton of energetic particles our way and fry our communication satellites. But there's less coverage of how we forecast these things, like the weather, so that we can prevent any potential damage.

How do you forecast a solar flare, anyway?

One way is to use machine learning programs, which are a type of artificial intelligence that learns automatically from experience. These algorithms gradually improve their mathematical models every time new data come in.

In order to learn properly, however, the algorithms require large amounts of data. Scientists lacked solar data on this scale before the 2010 launch of the Solar Dynamics Observatory (SDO), a sun-watching satellite that downlinks about a terabyte and a half of data every day, more data than any other satellite in NASA history.

Explore an interactive graphic showing where on the sun flares of different classes have been sighted over the years: Click image below to see interactive version on Scientific American.


Solar flares are notoriously complex. They occur in the solar atmosphere, above surface-dwelling sunspots. Sunspots, which generally come in pairs, act like bar magnets — that is, one spot acts like a north pole and the other like a south.

Given that there are lots of sunspots, that various layers on the sun are rotating at different speeds, and that the sun itself has a north and south pole, the magnetic field in the solar atmosphere gets pretty messy. Like a rubber band, a really twisted magnetic field will eventually snap—and release a lot of energy in the process. That's a solar flare. But sometimes twisted fields don't flare, sometimes flares come from fairly innocuous-looking sunspots, and sometimes huge sunspots never do a thing.

We don't understand the physics of how solar flares occur. We have ideas — we know flares are certainly magnetic in nature—but we don't really know how they release so much energy so fast. In the absence of a definitive physical theory, the best hope for forecasting solar flares lies in scrutinizing our vast data set for observational clues.

There are two general ways to forecast solar flares: numerical models and statistical models. In the first case, we take the physics that we do know, code up the equations, run them over time, and get a forecast. In the second, we use statistics.

We answer questions like: What's the probability that an active region that's associated with a huge sunspot will flare compared with one that's associated with a small sunspot? As such, we build large data sets, full of features—such as the size of a sunspot, or the strength of its magnetic field—and look for relationships between these features and solar flares.

Machine learning algorithms can help to this end. We use machine learning algorithms everywhere. Biometric watches run them to predict when we should wake up. They're better than doctors at predicting rare genetic disorders. They've identified paintings that have influenced artists throughout history.

Scientists find machine learning algorithms so universally useful because they can identify non-linear patterns—basically every pattern that can't be represented by straight lines—which is tough to do. But it's important, because lots of patterns are non-linear.

We've used machine learning algorithms to forecast solar flares using SDO's vast data set. To do this, we first built a database of all the active regions SDO has ever observed. Since it's historical data, we already know if these active regions flared or not. The learning algorithm then analyzes active region features—such as the size of a sunspot, the strength of its associated magnetic field and the twistedness of these field lines—to identify general characteristics of flaring active regions.

To do this, the algorithm starts by making a guess. Let's say its first guess is that a tiny sunspot with a weak magnetic field will produce a huge flare. Then it checks the answer. Whoops, nope.

The algorithm then tweaks the way that it guesses. The next time around, it'll make a different guess. Through trial and error—in the form of hundreds of thousands of guesses and checks—the algorithm figures out which features correspond to flaring active regions. Now, we have a self-taught algorithm that we can apply to real-time data.
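The guess-check-tweak loop described above can be sketched as a tiny logistic-regression classifier trained by gradient descent. The feature names and numbers here are invented for illustration; the real SDO pipeline uses far richer data and more sophisticated algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "active regions": columns stand in for sunspot size,
# field strength, and twist. A region flares (label 1) when a hidden
# combination of the features is large, plus a little noise.
X = rng.normal(size=(1000, 3))
signal = 1.5 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2]
y = (signal + rng.normal(scale=0.5, size=1000) > 0).astype(float)

# Guess, check, tweak: logistic regression via gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))    # guess flare probabilities
    grad = X.T @ (p - y) / len(y)   # check the guesses against history
    w -= 0.5 * grad                 # tweak the weights and try again

accuracy = np.mean(((1 / (1 + np.exp(-X @ w))) > 0.5) == y)
print(round(accuracy, 2))  # well above chance on this synthetic set
```

After hundreds of thousands of guess-and-check cycles on historical data, the learned weights can then be applied to real-time observations.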

Expanding such efforts could help us provide better notice of impending solar flares. So far, studies have found that machine-learning algorithms forecast flares better than, or at worst just as well as, the numerical or statistical methods. This is kind of a phenomenal result in and of itself.

These algorithms, which run without any human input whatsoever by simply looking for patterns in the data, and which are so general that you can use the same algorithm (on a different data set) to identify genetic disorders, can perform just as well as any other method used thus far to forecast solar flares.

And if we have more data? Who knows. Although we already have tons of data—SDO has been running for four and a half years—there haven't been a ton of flares during that time. That's because we're in the quietest solar cycle of the century. That's more reason to continue collecting data and keep the algorithms busy.

SEE ALSO: Neil DeGrasse Tyson: Here's What Everyone Gets Wrong About Solar Flares


Why Is Space Black?


Since there are stars and galaxies in all directions, why is space black? Shouldn’t there be a star in every direction we look?

Imagine you’re in space. Just the floating part, not the peeing into a vacuum hose or eating that funky “ice cream” from foil bags part. If you looked at the Sun, it would be bright and your retinas would crisp up. The rest of the sky would be a soothing black, decorated with tiny little less burny points of light.

If you’ve done your homework, you know that space is huge. It may even be infinite, which is much bigger than huge. If it is infinite, you can imagine looking out into space in any direction and there being a star. Stars would litter everything. Dumb stars everywhere wrecking the view. It’s stars all the way down, people.

So, shouldn’t the entire sky be as bright as a star, since there’s a star in every possible minute direction you could ever look in? If you’ve ever asked yourself this question, you probably won’t be surprised to know you’re not the first. Also, at this point you can tell people you were wondering about it and they’ll never know you just watched it here and then you can sound wicked smart and impress all those dudes.

This question was famously asked by the German astronomer Heinrich Wilhelm Olbers who described it in 1823. We now call this Olbers’ Paradox after him. Here let me give you a little coaching, you’ll start your conversation at the party with “So, the other day, I was contemplating Olbers’ Paradox… Oh what’s that? You don’t know what it is… oh that’s so sweet!”. The paradox goes like this: if the Universe is infinite, static and has existed forever, then everywhere you look should eventually hit a star.
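The paradox can be stated quantitatively. In an infinite, static, eternal universe with uniform star density \(n\) and average stellar luminosity \(L\), a thin spherical shell of radius \(r\) and thickness \(dr\) contributes a flux at Earth of

```latex
dF \;=\; \underbrace{\frac{L}{4\pi r^2}}_{\text{flux per star}}
         \times \underbrace{n \, 4\pi r^2 \, dr}_{\text{stars in the shell}}
   \;=\; n L \, dr .
```

The \(r^2\) dimming of each star exactly cancels the \(r^2\) growth in the number of stars, so every shell contributes the same amount and the total flux \(\int_0^\infty n L \, dr\) diverges. (In practice nearer stars would block farther ones, capping the sky at starlike surface brightness rather than infinity, but either way the night sky should blaze.)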

Our experiences tell us this isn’t the case. So by proposing this paradox, Olbers knew the Universe couldn’t be infinite, static and timeless. It could be a couple of these, but not all three. In the 1920s, debonair man about town, Edwin Hubble discovered that the Universe isn’t static. In fact, galaxies are speeding away from us in all directions like we have the cooties.

This led to the theory of the Big Bang, that the Universe was once gathered into a single point in time and space, and then, expanded rapidly. Our Universe has proven to not be static or timeless. And so, PARADOX SOLVED!

Here’s the short version. We don’t see stars in every direction because many of the stars haven’t been around long enough for their light to get to us. Which I hope tickles your brain in the way it does mine. Not only do we have this incomprehensibly massive size of our Universe, but the scale of time we’re talking about when we do these thought experiments is absolutely boggling. So, PARADOX SOLVED!

Well, not exactly. Shortly after the Big Bang, the entire Universe was hot and dense, like the core of a star. A few hundred thousand years after the Big Bang, when the first light was able to leap out into space, everything, in every direction was as bright as the surface of a star.

So, in all directions, we should still be seeing the brightness of a star... and yet we don’t. As the Universe expanded, the wavelengths of that initial visible light were stretched out and out and dragged to the wide end of the electromagnetic spectrum until they became microwaves. This is the Cosmic Microwave Background Radiation, and you guessed it, we can detect it in every direction we look.
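The size of that stretch can be estimated from Wien's displacement law, which says a glowing body's peak wavelength is \(b/T\) for temperature \(T\). A back-of-the-envelope sketch, assuming a temperature of about 3000 K when the first light escaped:

```python
# Wien's displacement law: peak wavelength = b / T.
b = 2.898e-3       # Wien's constant, metre-kelvins

T_then = 3000      # K: roughly the temperature when first light escaped
T_now = 2.725      # K: the CMB temperature measured today

peak_then = b / T_then   # ~9.7e-7 m, on the edge of visible light
peak_now = b / T_now     # ~1.1e-3 m, squarely in the microwave band

print(round(peak_now / peak_then))  # expansion stretched it ~1100-fold
```

A thousandfold stretch is exactly what turns the orange glow of a star's surface into invisible microwaves.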

So Olbers’ instinct was right. If you look in every direction, you’re seeing a spot as bright as a star, it’s just that the expansion of the Universe stretched out the wavelengths so that the light is invisible to our eyes. But if you could see the Universe with microwave detecting eyes, you’d see this: brightness in every direction.

Did you come up with Olbers’ Paradox too? What other paradoxes have puzzled you?

SEE ALSO: What Happens When You Enter A Black Hole?

LEARN MORE: Check out the BI Answers series.


A Physicist Has Revolutionized The Study Of Evolution



Michael Lässig can be certain that if he steps out of his home in Cologne, Germany, on the night of Jan. 19, 2030 — assuming he’s still alive and the sky is clear — he will see a full moon.

Lässig’s confidence doesn’t come from psychic messages he’s receiving from the future. He knows the moon will be full because physics tells him so.

“The whole of physics is about prediction, and we’ve gotten quite good at it,” said Lässig, a physicist at the University of Cologne. “When we know where the moon is today, we can tell where the moon is tomorrow. We can even tell where it will be in a thousand years.”

Early in his career, Lässig made predictions about quantum particles, but in the 1990s, he turned to biology, exploring how genes evolved. In his research, Lässig was looking back in time, reconstructing evolutionary history. Looking ahead to evolution’s future was not something that biologists bothered doing. It might be possible to predict the motion of the moon, but biology was so complex that trying to predict its evolution seemed a fool’s errand.

But lately, evolution is starting to look surprisingly predictable. Lässig believes that soon it may even be possible to make evolutionary forecasts. Scientists may not be able to predict what life will be like 100 million years from now, but they may be able to make short-term forecasts for the next few months or years. And if they’re making predictions about viruses or other health threats, they might be able to save some lives in the process.

“As we collect a few examples of predictability, it changes the whole goal of evolutionary biology,” Lässig said.

Replaying the Tape of Life

If you want to understand why evolutionary biologists have been so loath to make predictions, read “Wonderful Life,” a 1989 book by the late paleontologist Stephen Jay Gould.

The book is ostensibly about the Cambrian explosion, a flurry of evolutionary innovation that took place more than 500 million years ago. The oldest known fossils of many of today’s major animal groups date to that time. Our own lineage, the vertebrates, first made an appearance in the Cambrian explosion, for example.

But Gould had a deeper question in mind as he wrote his book. If you knew everything about life on Earth half a billion years ago, could you predict that humans would eventually evolve?

Gould thought not. He even doubted that scientists could safely predict that any vertebrates would still be on the planet today. How could they, he argued, when life is constantly buffeted by random evolutionary gusts? Natural selection depends on unpredictable mutations, and once a species emerges, its fate can be influenced by all sorts of forces, from viral outbreaks to continental drift, volcanic eruptions and asteroid impacts. Our continued existence, Gould wrote, is the result of a thousand happy accidents.

To illustrate his argument, Gould had his readers imagine an experiment he called “replaying life’s tape.” “You press the rewind button and, making sure you thoroughly erase everything that actually happened, go back to any time and place in the past,” he wrote. “Then let the tape run again and see if the repetition looks at all like the original.” Gould wagered that it wouldn’t.

Although Gould only offered it as a thought experiment, the notion of replaying the tape of life has endured. That’s because nature sometimes runs experiments that capture the spirit of his proposal.

Predictable Lizards

For an experiment to be predictable, it has to be repeatable. If the initial conditions are the same, the final conditions should also be the same. For example, a marble placed at the edge of a bowl and released will end up at the bottom of the bowl no matter how many times the action is repeated.

Biologists have found cases in which evolution has, in effect, run the same experiment several times over. And in some cases the results of those natural experiments have turned out very similar each time. In other words, evolution has been predictable.

One of the most striking cases of repeated evolution has occurred in the Caribbean. The islands there are home to a vast number of native species of anole lizards, which come in a staggering variety. The lizards live in the treetops, on forest floors and in open grassland. They come in a riot of colors and shapes. Some are blue, some are green and some are gray. Some are huge and bold while others are small and shy.

To understand how this diversity evolved, Jonathan Losos of Harvard University and his students gathered DNA from the animals. After they compared the genetic material from different species, the scientists drew an evolutionary tree, with a branch for every lizard species.

When immigrant lizards arrived on a new island, Losos found, their descendants could evolve into new species. It was as if the lizard tape of life was rewound to the same moment and then played again.

If Gould were right, the pattern of evolution on each island would look nothing like the pattern on the other islands. If evolution were more predictable, however, the lizards would tend to repeat the same patterns.

Losos and his students have found that evolution did sometimes veer off in odd directions. On Cuba, for example, a species of lizard adapted to spending a lot of time in the water. It dives for fish and can even sprint across the surface of a stream. You won’t find a fishing lizard on any other Caribbean island.

For the most part, though, lizard evolution followed predictable patterns. Each time lizards colonized an island, they evolved into many of the same forms. On each island, some lizards adapted to living high in trees, evolving pads on their feet for gripping surfaces, along with long legs and a stocky body. Other lizards adapted to life among the thin branches lower down on the trees, evolving short legs that help them hug their narrow perches. Still other lizards adapted to living in grass and shrubs, evolving long tails and slender trunks. On island after island, the same kinds of lizards have evolved.

“I think the tide is running against Gould,” Losos said. Other researchers are also finding cases in which evolution is repeating itself. When cichlid fish colonize lakes in Africa, for example, they diversify into the same range of forms again and again.

“But the question is: What’s the overall picture?” Losos asked. “Are we cherry-picking the examples that work against him, or are we going to find that most of life is deterministic? No one is going to say Gould is completely wrong. But they’re not going to say he’s completely right either.”

Evolution in a Test Tube

Natural experiments can be revealing, but artificial experiments can be precise. Scientists can put organisms in exactly the same conditions and then watch evolution unfold. Microbes work best for this kind of research because scientists can rear billions of them in a single flask and the microbes can go through several generations in a single day. The most spectacular of these experiments has been going on for 26 years — and more than 60,000 generations — in the lab of Richard Lenski at Michigan State University.

Lenski launched the experiment with a single E. coli microbe. He let it divide into a dozen genetically identical clones that he then placed in a dozen separate flasks. Each flask contained a medium — a cocktail of chemicals mixed into water — that Lenski created especially for the experiment. Among other ingredients, it contained glucose for the bacteria to feed on. But it was a meager supply, which ran out after just a few hours. The bacteria then had to eke out their existence until the next morning, when Lenski or his students transferred a little of the microbe-laced fluid into a fresh flask. With a new supply of glucose, they could grow for a few more hours. Lenski and his students at Michigan State have been repeating this chore every day since.
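The famous generation count falls out of simple arithmetic. If each daily transfer dilutes the culture about 100-fold (an assumed figure; the article only says "a little" of the fluid is moved each day), the bacteria must double log₂(100) ≈ 6.6 times a day to regrow:

```python
import math

dilution = 100                      # assumed daily dilution factor
gens_per_day = math.log2(dilution)  # doublings needed to regrow: ~6.64

years = 26
total_gens = gens_per_day * 365 * years
print(round(total_gens))  # ~63,000, consistent with "more than 60,000 generations"
```

Several doublings a day, every day, for a quarter century is what lets a single lab watch more generations of evolution than our species has experienced in its entire history.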

At the outset, Lenski wasn’t sure what would happen, but he had his suspicions. He expected mutations to arise randomly in each line of bacteria. Some would help the microbes reproduce faster while others would be neutral or even harmful. “I imagined they’d be running off in one direction or another,” Lenski said.

In other words, Lenski thought the tape of life would replay differently with each rewind. But that’s not what happened. What Lenski witnessed was strikingly similar to the evolution that Jonathan Losos has documented in the Caribbean.

Lenski and his students have witnessed evolutionary oddities arise in their experiment — microbial versions of the Cuban fishing lizards, if you will. In 2003, Lenski’s team noticed that one line of bacteria had abruptly switched from feeding on glucose to feeding on a compound called citrate. The medium contains citrate to keep iron in a form that the bacteria can absorb. Normally, however, the bacteria don’t feed on the citrate itself. In fact, the inability to feed on citrate in the presence of oxygen is one of the defining features of E. coli as a species.

But Lenski has also observed evolution repeat itself many times over in his experiment. All 12 lines have evolved to grow faster on their meager diet of glucose. That improvement has continued to this day in the 11 lines that didn’t shift to citrate. Their doubling time — the time it takes for them to double their population — has sped up 70 percent. And when Lenski and his students have pinpointed the genes that have mutated to produce this improvement, they are often the same from one line to the next.

“That’s not at all what I expected when I started the experiment,” Lenski said. “I evidently was wrong-headed.”

Getting Complex Without Getting Random

Lenski’s results have inspired other scientists to set up more complex experiments. Michael Doebeli, a mathematical biologist at the University of British Columbia, wondered how E. coli would evolve if it had two kinds of food instead of just one.

In the mid-2000s, he ran an experiment in which he provided glucose — the sole staple of Lenski’s experiment — and another compound E. coli can grow on, known as acetate.

Doebeli chose the two compounds because he knew that E. coli treats them very differently. When given a choice between the two, it will devour all the glucose before switching on the molecular machinery for feeding on acetate. That’s because glucose is a better source of energy. Feeding on acetate, by contrast, E. coli can only grow slowly.

Something remarkable happened in Doebeli’s experiment — and it happened over and over again. The bacteria split into two kinds, each specialized for a different way of feeding. One population became better adapted to growing on glucose. These glucose-specialists fed on the sugar until it ran out and then slowly switched over to feeding on acetate. The other population became acetate-specialists; they evolved to switch over to feeding on acetate even before the glucose supply ran out and could grow fairly quickly on acetate.

When two different kinds of organisms are competing for the same food, it’s common for one to outcompete the other. But in Doebeli’s experiment, the two kinds of bacteria developed a stable coexistence. That’s because both strategies, while good, are not perfect. The glucose-specialists start out growing quickly, but once the glucose runs out, they slow down drastically. The acetate-specialists, on the other hand, don’t get as much benefit from the glucose. But they’re able to grow faster than their rivals once the glucose runs out.
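This kind of frequency-dependent coexistence can be captured in a toy simulation. The affinity numbers below are invented for illustration (nothing here is measured from Doebeli's experiment); the point is only that when each type is better at harvesting one of the two resources, neither can drive the other extinct:

```python
# Toy resource-competition sketch of the glucose/acetate experiment.
# Each "season", fixed pools of glucose and acetate are split between the
# two types in proportion to (population x affinity); next season's
# population is the biomass each type produced. Affinities are hypothetical.
GLUCOSE, ACETATE = 100.0, 100.0
affinity = {
    "glucose_specialist": (1.0, 0.1),   # (glucose, acetate) affinities
    "acetate_specialist": (0.4, 1.0),
}

pop = {"glucose_specialist": 1.0, "acetate_specialist": 1.0}
for season in range(200):
    g_total = sum(pop[n] * affinity[n][0] for n in pop)
    a_total = sum(pop[n] * affinity[n][1] for n in pop)
    pop = {
        n: GLUCOSE * pop[n] * affinity[n][0] / g_total
           + ACETATE * pop[n] * affinity[n][1] / a_total
        for n in pop
    }

print(pop)  # both types persist at a stable mix
```

Because each strategy wins on "its" resource, a rare type always grows faster than the common one, so the populations settle into a stable mix rather than one excluding the other.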

Doebeli’s bacteria echoed the evolution of lizards in the Caribbean. Each time the lizards arrived on an island, they diversified into many of the same forms, each with its own set of adaptations. Doebeli’s bacteria diversified as well — and did so in flask after flask.

To get a deeper understanding of this predictable evolution, Doebeli and his postdoctoral researcher, Matthew Herron, sequenced the genomes of some of the bacteria from these experiments. In three separate populations they discovered that the bacteria had evolved in remarkable parallel. In every case, many of the same genes had mutated.

Although Doebeli’s experiments are more complex than Lenski’s, they’re still simple compared with what E. coli encounters in real life. E. coli is a resident of the gut, where it feeds on dozens of compounds, where it coexists with hundreds of other species, where it must survive changing levels of oxygen and pH, and where it must negotiate an uneasy truce with our immune system. Even if E. coli’s evolution might be predictable in a flask of glucose and acetate, it would be difficult to predict how the bacteria would evolve in the jungle of our digestive system.

And yet scientists have been surprised to find that bacteria evolve predictably inside a host. Isabel Gordo, a microbiologist at the Gulbenkian Institute of Science in Portugal, and her colleagues designed a clever experiment that enabled them to track bacteria inside a mouse. Mice were inoculated with a genetically identical population of E. coli clones. Once the bacteria arrived in the mice’s guts, they started to grow, reproduce and evolve. And some of the bacteria were carried out of the mouse’s body with its droppings. The scientists isolated the experimental E. coli from the droppings. By examining the bacteria’s DNA, the scientists could track their evolution from one day to the next.

The scientists found that it took only days for the bacteria to start evolving. Different lineages of E. coli picked up new mutations that made them reproduce faster than their ancestors. And again and again, they evolved many of the same traits. For example, the original E. coli couldn’t grow if it was exposed to a molecule called galactitol, which mammals make as they break down sugar. However, Gordo’s team found that as E. coli adapted to life inside a mouse, it always evolved the ability to withstand galactitol. The bacteria treated a living host like one of Lenski’s flasks — or an island in the Caribbean.

Evolution’s Butterfly Effect

Each new example of predictable evolution is striking. But, as Losos warned, we can’t be sure whether scientists have stumbled across a widespread pattern in nature. Certainly, testing more species will help. But Doebeli has taken a very different approach to the question: He’s using math to understand how predictable evolution is overall.

Doebeli’s work draws on pioneering ideas that geneticists like Sewall Wright developed in the early 1900s. Wright pictured evolution like a hilly landscape. Each point on the landscape represents a different combination of traits — the length of a lizard’s legs versus the width of its trunk, for example. A population of lizards might be located on a spot on the landscape that represents long legs and a narrow trunk. Another spot on the landscape would represent short legs and a narrow trunk. And in another direction, there’s a spot representing long legs and a thick trunk.

The precise combinations of traits in an organism will influence its success at reproducing. Wright used the elevation of a spot on the evolutionary landscape to record that success. An evolutionary landscape might have several peaks, each representing one of the best possible combinations. On such a landscape, natural selection always pushes populations up hills. Eventually, a population may reach the top of a hill; at that point, any change will lead to fewer offspring. In theory, the population should stay put.

The future of evolution might seem easy to predict on such a landscape. Scientists could simply look at the slope of the evolutionary landscape and draw a line up the nearest hill.
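On a fixed landscape, that line-drawing exercise is just gradient ascent. A minimal sketch, using a hypothetical two-trait fitness function with a single peak at leg length 2 and trunk width 3:

```python
# Naive prediction on Wright's landscape: follow the local slope uphill.
# The fitness function is invented for illustration, peaked at (2, 3).
def fitness(leg_length, trunk_width):
    return -((leg_length - 2.0) ** 2 + (trunk_width - 3.0) ** 2)

def slope(leg, trunk, h=1e-6):
    # Numerical gradient of the fitness surface at (leg, trunk).
    d_leg = (fitness(leg + h, trunk) - fitness(leg - h, trunk)) / (2 * h)
    d_trunk = (fitness(leg, trunk + h) - fitness(leg, trunk - h)) / (2 * h)
    return d_leg, d_trunk

leg, trunk = 0.0, 0.0                  # the population's starting traits
for generation in range(500):
    d_leg, d_trunk = slope(leg, trunk)
    leg += 0.05 * d_leg                # selection pushes each trait uphill
    trunk += 0.05 * d_trunk

print(leg, trunk)  # the population settles on the peak near (2, 3)
```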

“This view is just simply wrong,” said Doebeli.

That’s because the population’s evolution changes the landscape. If a population of bacteria evolves to feed on a new kind of food, for example, then the competition for that food becomes fierce. The benefit of specializing on that food goes down, and the peak collapses. “It’s actually the worst place to be,” Doebeli said.

To keep climbing uphill, the population has to veer onto a new course, toward a different peak. But as it travels in a new direction, it alters the landscape yet again.

Recently, Doebeli and Iaroslav Ispolatov, a mathematician at the University of Santiago in Chile, developed a model to understand how evolution works under these more complicated conditions. Their analysis suggests that evolution is a lot like the weather — in other words, it’s difficult to predict.

In the early 1960s, a scientist at the Massachusetts Institute of Technology named Edward Lorenz developed one of the first mathematical models of weather. He hoped that it would reveal repeatable patterns that would help meteorologists predict the weather more accurately.

But Lorenz discovered just the opposite. Even a tiny change to the initial conditions of the model led, in time, to drastically different kinds of weather. In other words, Lorenz had to understand the model’s initial conditions with perfect accuracy to make long-term predictions about how it would change. Even a slight error would ruin the forecast.

Mathematicians later dubbed this sensitivity chaos. They would find that many systems — even surprisingly simple ones — behave chaotically. One essential ingredient for chaos is feedback — the ability for one part of the system to influence another, and vice versa.  Feedback amplifies even tiny differences into big ones. When Lorenz presented his results, he joked that the flap of a butterfly’s wings in Brazil could set off a tornado in Texas.
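Lorenz's sensitivity is easy to reproduce. Here is a sketch of his system (standard textbook parameters, simple Euler integration) run from two starting points that differ by one part in a hundred million:

```python
# Two copies of the Lorenz system from almost identical initial conditions.
def step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)    # a "butterfly flap" of difference
for _ in range(10000):        # 50 time units of simulated weather
    a, b = step(a), step(b)

gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # the tiny initial difference has grown enormously
```

After enough simulated time, the two trajectories bear no resemblance to each other, even though they began essentially identical; that is the feedback-driven amplification the text describes.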

Evolution has feedbacks, too. A population evolves to climb the evolutionary landscape, but its changes alter the landscape itself. To see how these feedbacks affected evolution, Doebeli and Ispolatov created their own mathematical models.  They would drop populations onto the evolutionary landscape at almost precisely the same spot. And then they followed the populations as they evolved.

In some trials, the scientists only tracked the evolution of a few traits, while in others, they tracked many. They found that in the simple models, the populations tended to follow the same path, even though they started out in slightly different places. In other words, their evolution was fairly easy to predict.

But when the scientists tracked the evolution of many traits at once, that predictability disappeared. Despite starting out under almost identical conditions, the populations veered off on different evolutionary paths. In other words, evolution turned to chaos.

Doebeli and Ispolatov’s research suggests that for the most part, evolution is too chaotic to be predicted with any great accuracy. If they are right, then the successes that scientists like Losos and Lenski have had in finding predictable evolution are the exceptions that prove the rule. The future of evolution, for the most part, is as fundamentally unknowable as the future of the weather.

This conclusion may seem strange coming from Doebeli. After all, he has conducted experiments on E. coli that have shown just how predictable evolution can be. But he sees no contradiction. “It’s just a matter of time scales,” he said. “Over short periods of time, it is predictable, if you have enough information. But you can’t predict it over long periods of time.”

Darwin’s Prophets

Even over short periods of time, accurate forecasts can save lives. Meteorologists can make fairly reliable predictions about treacherous weather a few days in advance. That can be enough time to evacuate a town ahead of a hurricane or lay in supplies for a blizzard.

Richard Lenski thinks that recent studies raise the question of whether evolutionary forecasting could also provide practical benefits. “I think the answer is definitely yes,” he said.

One of the most compelling examples comes from Lässig. Using his physics background, he is working on a way to forecast the flu.

Worldwide, the flu kills as many as 500,000 people a year. Outside of the tropics, infections cycle annually from a high in winter to a low in summer. Flu vaccines can offer some protection, but the rapid evolution of the influenza virus makes it a moving target for vaccination efforts.

The influenza virus reproduces by invading the cells in our airway and using their molecular machinery to make new viruses. It’s a sloppy process, which produces many new mutants. Some of their mutations are harmful, crippling the viruses so that they can’t reproduce. But other mutations are harmless. And still others will make new flu viruses even better at making copies of themselves.

As the flu virus evolves, it diverges into many different strains. A vaccine that is effective against one strain will offer less protection against others. So vaccine manufacturers try to provide the best defense each flu season by combining the three or four most common strains of the flu.

There’s a problem with this practice, however. Manufacturing a new season’s flu vaccines takes several months. In the United States and other countries in the Northern Hemisphere, vaccine manufacturers must decide in February which strains to use for the flu season that starts in October. They often make the right prediction. But sometimes a strain that’s not covered by the vaccine unexpectedly comes to dominate a flu season. “If something goes wrong, it can cost thousands of lives,” Lässig said.

A few years ago, Lässig started to study the vexing evolution of the flu. He focused his attention on the rapidly evolving proteins that stud the shell of the flu virus, called hemagglutinin. Hemagglutinin latches on to receptors on our cells and opens up a passageway for the virus to invade.

When we get sick with the flu, our immune system responds by building antibodies that grab onto the tip of the hemagglutinin protein. The antibodies prevent the viruses from invading our cells and also make it easier for immune cells to detect the viruses and kill them. When we get flu vaccines, they spur our immune system to make those antibodies even before we get sick so that we’re ready to wipe out an infection as soon as it starts.

Scientists have been sequencing hemagglutinin genes from flu seasons for more than 40 years. Poring over this trove of information, Lässig was able to track the evolution of the viruses. He found that most mutations that altered the tip of the hemagglutinin protein helped the viruses reproduce more, probably because they made it difficult for antibodies to grab onto them. Escaping the immune system, they can make more copies of themselves.

Each strain of the flu has its own collection of beneficial mutations. But Lässig noticed that the viruses also carry harmful mutations in their hemagglutinin gene. Those harmful mutations make hemagglutinin less stable and thus less able to open up cells for invasion.

It occurred to Lässig that these mutations might determine which strains would thrive in the near future. Perhaps a virus with more beneficial mutations would be more likely to escape people’s immune systems. And if they escaped destruction, they would make more copies of themselves. Likewise, Lässig theorized, the more harmful mutations a virus had, the more it would struggle to invade cells.

If that were true, then it might be possible to predict which strains would become more or less common based on how many beneficial and harmful mutations they carried. Working with Columbia University biologist Marta Łuksza, he came up with a way to score the evolutionary potential of each strain of the flu. For each beneficial mutation, a strain earned a point. For each harmful one, Lässig and Łuksza took a point away.

The scientists examined thousands of strains of the flu that have been sampled since 1993. They would calculate the score for every strain in a given year and then use that score to predict how it would fare the following year. They correctly forecast whether a strain would grow or decline about 90 percent of the time. “It’s a simple procedure,” Lässig said. “But it works reasonably well.”
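The bookkeeping behind that score is simple enough to sketch. The strain names and mutation counts below are invented for illustration; only the +1/−1 scoring reflects the method described above:

```python
# Score each strain: +1 per beneficial mutation, -1 per harmful one.
# Higher-scoring strains are predicted to grow in the next flu season.
strains = {  # hypothetical strains: (beneficial, harmful) mutation counts
    "strain-A": (4, 1),
    "strain-B": (2, 3),
    "strain-C": (3, 2),
}

def score(name):
    beneficial, harmful = strains[name]
    return beneficial - harmful

ranking = sorted(strains, key=score, reverse=True)
print(ranking)  # ordered from predicted growth to predicted decline
```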

Lässig and his colleagues are now exploring ways to improve their forecast. Lässig hopes to be able to make predictions about future flu seasons that the World Health Organization could consult as they decide which strains should be included in flu vaccines. “It’s just a question of a few years,” he said.

The flu isn’t the only disease that evolutionary forecasting could help combat. Bacteria are rapidly evolving resistance to antibiotics. If scientists can predict the path that the microbes will take, they may be able to come up with strategies for putting up roadblocks.

Forecasting could also be useful in fighting cancer. When cells turn cancerous, they undergo an evolution of their own. As cancer cells divide, they sometimes gain mutations that let them grow faster or escape the immune system’s notice. It may be possible to forecast how tumors will evolve and then plan treatments accordingly.

Beyond its practical value, Lässig sees a profound importance to being able to predict evolution. It will bring the science of evolutionary biology closer to other fields like physics and chemistry. Lässig doesn’t think that he’ll be able to forecast evolution as easily as he can the motion of the moon, but he hopes that there’s much about evolution that will prove to be predictable. “There’s going to be a boundary, but we don’t know where the boundary is,” he said.

SEE ALSO: This Physicist Has A Groundbreaking Idea About Why Life Exists

Join the conversation about this story »

Researchers Are Printing Out Shape-Shifting '4D' Structures


Using a new technique known as 4D printing, researchers can print out dynamic 3D structures capable of changing their shapes over time.

Such 4D-printed items could one day be used in everything from medical implants to home appliances, scientists added.

Today's 3D printing creates items from a wide variety of materials — plastic, ceramic, glass, metal, and even more unusual ingredients such as chocolate and living cells.

The machines work by setting down layers of material just like ordinary printers lay down ink, except 3D printers can also deposit flat layers on top of each other to build 3D objects.

"Today, this technology can be found not just in industry, but [also] in households for less than $1,000," said lead study author Dan Raviv, a mathematician at MIT. "Knowing you can print almost anything, not just 2D paper, opens a window to unlimited opportunities, where toys, household appliances and tools can be ordered online and manufactured in our living rooms."

Now, in a further step, Raviv and his colleagues are developing 4D printing, which involves 3D printing items that are designed to change shape after they are printed. [The 10 Weirdest Things Created By 3D Printing]

"The most exciting part is the numerous applications that can emerge from this work," Raviv told Live Science. "This is not just a cool project or an interesting solution, but something that can change the lives of many."


In a report published online Dec. 18 in the journal Scientific Reports, the researchers explain how they printed 3D structures using two materials with different properties. One material was a stiff plastic, and stayed rigid, while the other was water absorbent, and could double in volume when submerged in water. The precise formula of this water-absorbent material, developed by 3D-printing company Stratasys in Eden Prairie, Minnesota, remains a secret.

The researchers printed up a square grid, measuring about 15 inches (38 centimeters) on each side. When they placed the grid in water, they found that the water-absorbent material could act like joints that stretch and fold, producing a broad range of shapes with complex geometries. For example, the researchers created a 3D-printed shape that resembled the initials "MIT" that could transform into another shape resembling the initials "SAL."

Dan Raviv / Scientific Reports

"In the future, we imagine a wide range of applications," Raviv said. These could include appliances that can adapt to heat and improve functionality or comfort, childcare products that can react to humidity or temperature, and clothing and footwear that will perform better by sensing the environment, he said.

In addition, 4D-printed objects could lead to novel medical implants. "Today, researchers are printing biocompatible parts to be implanted in our body," Raviv said. "We can now generate structures that will change shape and functionality without external intervention."


One key health-care application might be cardiac stents, tubes placed inside the heart to aid healing. "We want to print parts that can survive a lifetime inside the body if necessary," Raviv said.

The researchers now want to create both larger and smaller 4D-printed objects. "Currently, we've made items a few centimeters in size," Raviv said. "For things that go inside the body, we want to go 10 to 100 times smaller. For home appliances, we want to go 10 times larger."

Raviv cautioned that a great deal of research is needed to improve the materials used in 4D printing. For instance, although the 4D-printed objects the researchers developed can withstand a few cycles of wetting and drying, after several dozen cycles of folding and unfolding, the materials lose their ability to change shape. The scientists said they would also like to develop materials that respond to factors other than water, such as heat and light.

Follow us @livescience, Facebook & Google+. Original article on Live Science.

Copyright 2014 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

SEE ALSO: These Are 3 Breakthroughs That Bill Nye Thinks Will Change The World

CHECK OUT: This Is A Memory Champion's One Trick To Remember Everything


A Ton Of Party Tricks To Fool And Impress Your Friends And Family


Getting bored sitting around your family's house? Sick of hanging out with friends from high school? Here's a bunch of videos of awesome party tricks to keep yourself entertained, from Richard Wiseman's Quirky Mind Stuff blog.

10 amazing science stunts for parties:

10 bets you will always win:

10 more bets you will always win:

Even more bets you will always win:

Amazing practical jokes:

CHECK OUT: How To Figure Out What To Get Your Significant Other

SEE ALSO: The Most Jaw-Dropping Science Pictures Of 2014



There Are Parts Of The Universe We Will Never Be Able To See


Thanks to our powerful telescopes, there are many places in the Universe we can see. But there are places hidden from us, and places that we’ll never be able to see.

We're really lucky to live in our Universe with our particular laws of physics. At least, that's what we keep telling ourselves. The laws of physics can be cruel and unforgiving, and should you try and cross them, they will crush you like a bug.

Here at Universe Today, we embrace our Physics overlords and prefer to focus on the positive: the fact that light travels at the speed of light is really helpful. This allows us to look backwards in time as we look further out. Billions of light-years away, we can see what the Universe looked like billions of years ago. Physics is good. Physics knows what's best. Thanks, physics. And where the hand of physics gives, it can also take away.

There are some parts of the Universe that we'll never, ever be able to see. No matter what we do. They'll always remain just out of reach. No matter how much we plead, in some sort of Kafka-esque nightmare, these rules do not appear to have a conscience or room for appeal.

As we look outward in the cosmos, we look backwards in time, and at the very edge of our vision is the Cosmic Microwave Background Radiation: the point after the Big Bang when everything had cooled down enough that the Universe was no longer opaque. Light could finally escape and travel through a transparent Universe. This happened about 300,000 years after the Big Bang. What happened before that is a mystery. We can calculate what the Universe was like, but we can't actually look at it. Possibly, we just don't have the right clearance levels.

On the other end of the timeline, in the distant, distant future, there will be a lot less to see — assuming humans, or our Terry Gilliam-inspired robot bodies, are still around to observe the Universe. Distance is also out to rain on our sightseeing safari. The expansion of the Universe is accelerating, and galaxies are speeding away from each other faster and faster. Eventually, they'll be moving away from us faster than the speed of light.

When that happens, we'll see the last few photons from those distant galaxies, redshifted into oblivion. And then, we won't see any galaxies at all. Their light will never reach us and our skies will be eerily empty. Just don't let physics hear a sad tone in your voice; we don't want to spend another night in the "joy re-education camps."

Currently, we can see a sphere of the Universe that measures 92 billion light-years across. Outside that sphere is more Universe, a hidden, censored Universe. Universe that we can't see because the light hasn't reached us yet. Fortunately, every year that goes by, a little less Universe is redacted from the record, and the sphere we can observe gets bigger by one light-year. We can see a little more in all directions.

Finally, let's consider what's inside the event horizon of a black hole. A place that you can't look at, because the gravity is so strong that light itself can never escape it. So by definition, you can't see what absorbs all its own light. Astronomers don't know if black holes crunch down to a physical sphere and stop shrinking, or continue shrinking forever, getting smaller and smaller into infinity. Clearly, we can't look there because we shouldn't be looking there. They're terrible places. The possibility of shrinking forever gives me the heebies.

And so, good news! The chocolate ration has been increased from 40 grams to 25 grams, and our physics overlords are good, can only do good, and always know what's best for us. In fact, they're so good that gravity might actually provide us with a tool to "see" these hidden places, but only because "they" want us to.

When black holes form, or massive objects smash into each other, or there are "Big Bangs", these events generate distortions in spacetime called gravitational waves. Like gravity itself, these propagate across the Universe and could be detected. It's possible we could use gravitational waves to "see" beyond the event horizon of a black hole, or past the Cosmic Microwave Background Radiation.

The problem is that gravitational waves are so faint, we haven't even detected a single one yet. But that's probably just a technology problem; in the end, we need a more sensitive observatory. We'll get there. Alternatively, we could apply to the laws-of-physics board of appeals, fill out one of their 2,500-page application forms in triplicate, and see if we can be granted a rules exception — maybe just a tiny little peek behind that veil.

We live in an amazing Universe, most of which we'll never be able to see. But that's okay; there's enough we can see to keep us busy until infinity. What law of physics would you like to be granted a special exception to ignore? Tell us in the comments below.

NOW READ: Why Is Space Black?

IN DEPTH: These Stunning Hubble Images Show Us The Secrets Of The Universe


Scientists Made An Amazing Video That Shows The Short But Beautiful Life Of A Match On Fire

This award-winning, high-speed video shows the epic life story of a match once lit. The scientists who made the video used a special technique, called Schlieren imaging, to highlight the flame's thick gas churning and mixing with the air around it in a beautiful black-and-white display.
The video won the Milton van Dyke Award at this year's American Physical Society Division of Fluid Dynamics.

Video courtesy of Victor Miller.


The search for supersymmetry: Come out, come out, wherever you are!


In March, after a two-year shutdown for an upgrade, the world's biggest particle accelerator, the Large Hadron Collider (LHC), will reopen for business.

The rest of the year will see physicists biting their nails — for one way or another 2015 will go down as a famous date in their field.

Either theoreticians will be proved spectacularly right, and experimenters can move confidently on into the verdant pastures of so-called new physics, engaging in a positive safari of hunting for novel particles, or they will find out, to exaggerate only slightly, that they do not understand how the universe really works.

The LHC's main job, now it has found the much-heralded Higgs boson, is to track down an almost equally heralded--and more than equally elusive--phenomenon called Susy. This is the nickname physicists have given to the concept of supersymmetry, which lies at the heart of most models of new physics.

Susy, dreamed up in 1981 to answer tough questions about existing physical models, has been playing hide and seek since then as first the Americans, using the now-closed Tevatron accelerator at Fermilab, near Chicago, and then the Europeans, using the LHC at CERN, a laboratory in Geneva, have sought signs of her existence. Researchers have gradually ramped up the power of their machines, looking for telltale particles, and have now arrived at the point where, if some of these particles do not appear in the latest ramp-up, they will have to scrap the idea and come up with something else.

Susy exists to resolve a conundrum. In the second half of the 20th century physicists painstakingly assembled what has come to be called the Standard Model. This explains all known fundamental particles and forces except for gravity, which has its own private model called general relativity. But, though the Standard Model works, it depends on many arbitrary mathematical assumptions. The conundrum is why these assumptions have the values they do. But the need for a lot of those assumptions would disappear if the known particles had heavier partner particles: their supersymmetric twins.

There are various versions of supersymmetry, but all of the most plausible predict that some of these partner particles, though heavier than the particles of the Standard Model, and thus harder to make in accelerators, are nevertheless sufficiently light that either they should have been found already, or else they should show up pretty quickly when the LHC is turned back on. The machine's upgrade is therefore the last throw of the dice for the theory, at least in its conventional form.

Failing to find supersymmetry would be tricky not only for those who hope to use it to clarify the Standard Model, but also for those others who think Susy will explain the nature of so-called dark matter--which its gravitational effects show is six times as abundant in the universe as the familiar matter of which atoms are made. Many physicists are betting that dark matter is composed of one or more types of supersymmetric partner particles. If those particles turn out to be illusory, these physicists, too, will have to think again.


If Susy does not show up, though, it will not be the end of physics. In science, not finding something can often be more exciting than finding it. In the late 19th century, for example, there was a Susy-like hunt for the luminiferous aether, which almost all physicists then believed pervaded space and propagated light in the way that air propagates sound. But an experiment by two Americans, Albert Michelson and Edward Morley, showed that the aether does not exist. Physics had been barking up the wrong tree, and it took Max Planck and Albert Einstein, the conceivers of quantum theory and relativity theory, to give it new trees to bark up.

A century later, a few alternative trees have already been planted, just in case Susy does fail to show up for her date. There are, for instance, more complicated versions of supersymmetry that have the virtue, from the point of view of the current absence of telltale particles, that their own predicted particles are too heavy for even the upgraded LHC to make.

The vice of these theories is that they are indeed more complicated. Invoking them smacks of an ancient astronomer adding an epicycle to a planet's orbit to make that planet's movement fit the data, when what is actually needed is a shift of perspective about where the centre of the solar system is.

Another approach, which has the virtue of requiring such a shift of perspective, is to accept that the Standard Model's arbitrary assumptions are actually arbitrary realities. Physicists are reluctant to do this because even small changes in the numbers would cause the whole thing to break down. The result would be either a radically different universe or no universe at all. It beggars belief, the argument goes, that things could be so finely tuned as to produce this particular universe, the one humans live in, by accident.

The way out of this, for those unwilling to invoke an intelligent creator, is to allow that the observable universe is just one of an indefinite number of universes, each with its own laws of physics. In that case, only universes governed by the Standard Model, or something similar to it, could have the conditions needed for the emergence of physicists capable of observing it.

Such arguments shade into philosophy, for even if multiple universes do exist it may be impossible to observe them. But then, in Isaac Newton's day, physics was known as "natural philosophy". Perhaps it is time to revive the term.

Click here to subscribe to The Economist


11 Mind-Blowing Physics Discoveries Made In 2014



With the help of highly sensitive particle detectors, some of the world's most powerful lasers, and good-old-fashioned quantum mechanics, physicists from around the world made important discoveries this year. 

From detecting elusive particles forged in the core of our sun to teleporting quantum data farther than ever before, these physicists' scientific research has helped us better understand the universe in which we live as well as pave the way for a future of quantum computers, nuclear fusion, and more. 

11. Multiple teams detected what could be our first hints of dark matter.

Dark matter — the substance that makes up most of the matter in the universe yet seems undetectable to us here on Earth — is still shrouded in mystery, but two important discoveries in 2014 shed the first rays of light on this elusive material.

Dark matter makes up 26.8% of our universe, and knowing so little about such a large portion of the cosmos is exactly why studies like these are so important.

In September 2014, scientists published, in the journal Physical Review Letters, an unusual measurement from the space-based detector called the Alpha Magnetic Spectrometer (AMS). The detector measured an unexpected excess of positrons — the antiparticles of electrons — in the high-energy radiation from space known as cosmic rays. One explanation for this excess is the decay of dark matter.

Then, a few months later, a team of scientists discovered another possible sign of dark matter. Using the European Space Agency's XMM-Newton spacecraft and NASA's space-based Chandra X-ray Observatory, two groups of scientists measured a surprising spike in X-ray emissions coming from the Andromeda galaxy and the Perseus galaxy cluster. No known particle can explain this spike, leading the scientists to suspect more mysterious causes, one being dark matter, which they report in the journal Physical Review Letters.

Although neither of these surprising measurements confirms the detection of dark matter, both are an important step toward nailing down, once and for all, what our universe is made of.



10. For the first time, physicists figured out the chemical composition of the mysterious and extremely rare phenomenon of "ball lightning."

Reports of ball lightning stretch back as far as the 16th century, but until the 1960s most scientists refused to believe it was real. But it is real. Ball lightning is a floating sphere or disk of lightning up to 10 feet across that lasts only seconds.

This year, however, scientists in China not only added to the mounting evidence supporting ball lightning's existence, they also took the first spectrum of the rare phenomenon. A spectrum is the rainbow of individual wavelengths of light from a given source, and it can reveal the source's chemical makeup because different atoms give off different energies (and therefore colors) of light when excited.

In the ball lightning's spectrum, the physicists saw minerals from soil, which supports the theory that ball lightning forms after a bolt of lightning strikes the ground. The lightning vaporizes the silicon in the soil, creating a floating ball of silicon that glows as it interacts with oxygen in the air.

The physicists announced their discovery in January in the journal Physical Review Letters.



9. An analogue of the theoretical radiation emitted by black holes was recreated in the lab.

Last October, Jeff Steinhauer, a physicist at the Technion-Israel Institute of Technology in Haifa, announced that he had created an analogue for a bizarre type of radiation that can, in theory, escape black holes.

Black holes have such a strong gravitational pull that once anything passes a certain point, called the event horizon, it is trapped and cannot escape. The one theorized exception is a special kind of radiation called Hawking radiation.

First proposed by Stephen Hawking in 1974, Hawking radiation has never been observed in space. It describes how quantum effects near the event horizon let particles of radiation move from inside the black hole to outside of it, a behavior that is theoretically possible according to quantum mechanics.

Steinhauer created a sonic black hole in the lab that traps sound instead of light. This is much easier because sound moves much slower than light.

In a paper published in October in the journal Nature Physics, he describes how he discovered sound waves hopping the "black hole's" event horizon.

This analogue to Hawking radiation could help solve a burning question for physicists who study black holes: If a piece of radiation is encoded with information, like the spin value of particles, and falls into a black hole, is that information lost forever? 




New Theory Suggests That We Live In The Past Of A Parallel Universe


Physicists have a problem with time.

Whether through Newton’s gravitation, Maxwell’s electrodynamics, Einstein’s special and general relativity or quantum mechanics, all the equations that best describe our universe work perfectly if time flows forward or backward.
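This time-reversal symmetry can be seen in a minimal numerical sketch (all names and parameters below are illustrative, not taken from any cited work): run a projectile forward under constant gravity with a time-reversible integrator, flip the sign of its velocity, and the very same update equations retrace the trajectory back to its starting point.

```python
# Toy demonstration of time-reversal symmetry in Newtonian mechanics.
# Velocity Verlet is itself time-reversible, so flipping the velocity
# and re-running the same update rule retraces the path exactly.

def verlet_step(x, v, a, dt):
    """Advance position and velocity by one time step under constant acceleration a."""
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + a * dt
    return x_new, v_new

def run(x, v, a, dt, steps):
    for _ in range(steps):
        x, v = verlet_step(x, v, a, dt)
    return x, v

x0, v0, g, dt, n = 0.0, 10.0, -9.8, 0.01, 200
x1, v1 = run(x0, v0, g, dt, n)    # forward in time
x2, v2 = run(x1, -v1, g, dt, n)   # flip velocity, run the same equations again
# The system retraces its path: x2 == x0 and v2 == -v0 (up to rounding).
```

Nothing in the update rule distinguishes forward from backward; the asymmetry we experience has to come from somewhere else.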

Of course the world we experience is entirely different. The universe is expanding, not contracting. Stars emit light rather than absorb it, and radioactive atoms decay rather than reassemble. Omelets don’t transform back to unbroken eggs and cigarettes never coalesce from smoke and ashes. We remember the past, not the future, and we grow old and decrepit, not young and rejuvenated. For us, time has a clear and irreversible direction. It flies forward like a missile, equations be damned.

For more than a century, the standard explanation for “time’s arrow,” as the astrophysicist Arthur Eddington first called it in 1927, has been that it is an emergent property of thermodynamics, as first laid out in the work of the 19th-century Austrian physicist Ludwig Boltzmann. In this view what we perceive as the arrow of time is really just the inexorable rearrangement of highly ordered states into random, useless configurations, a product of the universal tendency for all things to settle toward equilibrium with one another.

Informally speaking, the crux of this idea is that “things fall apart,” but more formally, it is a consequence of the second law of thermodynamics, which Boltzmann helped devise. The law states that in any closed system (like the universe itself), entropy—disorder—can only increase. Increasing entropy is a cosmic certainty because there are always a great many more disordered states than orderly ones for any given system, similar to how there are many more ways to scatter papers across a desk than to stack them neatly in a single pile.
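Boltzmann's counting argument can be made concrete with a toy model, assuming, purely for illustration, a system of 100 two-state "coins": the ordered macrostate (all heads) corresponds to a single microstate, while the half-and-half macrostate corresponds to roughly 10^29 of them.

```python
# Counting microstates for a toy system of 100 two-state "particles".
# The ordered macrostate (all heads) has exactly one arrangement; the
# disordered macrostate (50 heads) has astronomically many, which is why
# random shuffling overwhelmingly drives the system toward disorder.
# This example is illustrative only.
from math import comb, log

n = 100
ordered = comb(n, n)          # all heads: exactly 1 microstate
disordered = comb(n, n // 2)  # 50 heads: about 1e29 microstates

# Boltzmann entropy S = k * ln(W), here with k = 1: more microstates, more entropy.
s_ordered = log(ordered)      # 0.0
s_disordered = log(disordered)
```

With the odds stacked 10^29 to 1, "things fall apart" is less a law of nature than a statement of statistics.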

The thermodynamic arrow of time suggests our observable universe began in an exceptionally special state of high order and low entropy, like a pristine cosmic egg materializing at the beginning of time to be broken and scrambled for all eternity. From Boltzmann’s era onward, scientists allergic to the notion of such an immaculate conception have been grappling with this conundrum.

Boltzmann, believing the universe to be eternal in accordance with Newton’s laws, thought that eternity could explain a low-entropy origin for time’s arrow. Given enough time—endless time, in fact—anything that can happen will happen, including the emergence of a large region of very low entropy as a statistical fluctuation from an ageless, high-entropy universe in a state of near-equilibrium. Boltzmann mused that we might live in such an improbable region, with an arrow of time set by the region’s long, slow entropic slide back into equilibrium.

Today’s cosmologists have a tougher task, because the universe as we now know it isn’t ageless and unmoving: They have to explain the emergence of time’s arrow within a dynamic, relativistic universe that apparently began some 14 billion years ago in the fiery conflagration of the big bang. More often than not the explanation involves ‘fine-tuning’—the careful and arbitrary tweaking of a theory’s parameters to accord with observations.

Many of the modern explanations for a low-entropy arrow of time involve a theory called inflation—the idea that a strange burst of antigravity ballooned the primordial universe to an astronomically larger size, smoothing it out into what corresponds to a very low-entropy state from which subsequent cosmic structures could emerge. But explaining inflation itself seems to require even more fine-tuning. One of the problems is that once begun, inflation tends to continue unstoppably. This “eternal inflation” would spawn infinitudes of baby universes about which predictions and observations are, at best, elusive. Whether this is an undesirable bug or a wonderful feature of the theory is a matter of fierce debate; for the time being it seems that inflation’s extreme flexibility and explanatory power are both its greatest strength and its greatest weakness.

For all these reasons, some scientists seeking a low-entropy origin for time’s arrow find explanations relying on inflation slightly unsatisfying. “There are many researchers now trying to show in some natural way why it’s reasonable to expect the initial entropy of the universe to be very low,” says David Albert, a philosopher and physicist at Columbia University. “There are even some who think that the entropy being low at the beginning of the universe should just be added as a new law of physics.”

That latter idea is tantamount to despairing cosmologists simply throwing in the towel. Fortunately, there may be another way.

Tentative new work from Julian Barbour of the University of Oxford, Tim Koslowski of the University of New Brunswick and Flavio Mercati of the Perimeter Institute for Theoretical Physics suggests that perhaps the arrow of time doesn’t really require a fine-tuned, low-entropy initial state at all but is instead the inevitable product of the fundamental laws of physics. Barbour and his colleagues argue that it is gravity, rather than thermodynamics, that draws the bowstring to let time’s arrow fly. Their findings were published in October in Physical Review Letters.

The team’s conclusions come from studying an exceedingly simple proxy for our universe, a computer simulation of 1,000 pointlike particles interacting under the influence of Newtonian gravity. They investigated the dynamic behavior of the system using a measure of its "complexity," which corresponds to the ratio of the distance between the system’s closest pair of particles and the distance between the most widely separated particle pair. The system’s complexity is at its lowest when all the particles come together in a densely packed cloud, a state of minimum size and maximum uniformity roughly analogous to the big bang. The team’s analysis showed that essentially every configuration of particles, regardless of their number and scale, would evolve into this low-complexity state. Thus, the sheer force of gravity sets the stage for the system’s expansion and the origin of time’s arrow, all without any delicate fine-tuning to first establish a low-entropy initial condition.
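A toy version of that complexity measure, as described above, can be written in a few lines (the function name and the tiny four-particle configuration are mine, for illustration; this is not the authors' actual code):

```python
# A toy reading of the "complexity" measure described in the article:
# the ratio of the closest pair separation to the widest pair separation
# in a configuration of pointlike particles. Illustrative only; not the
# computation used by Barbour, Koslowski and Mercati.
from itertools import combinations
from math import dist

def complexity_ratio(points):
    """Ratio of smallest to largest pairwise distance in a configuration."""
    distances = [dist(p, q) for p, q in combinations(points, 2)]
    return min(distances) / max(distances)

# Four particles on a unit square: the closest pairs are the sides
# (length 1), the widest pairs are the diagonals (length sqrt(2)).
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
r = complexity_ratio(square)   # 1/sqrt(2)
```

In the team's 1,000-particle simulations, it is the evolution of a quantity like this, rather than any fine-tuned initial condition, that picks out the special central state.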

From that low-complexity state, the system of particles then expands outward in both temporal directions, creating two distinct, symmetric and opposite arrows of time. Along each of the two temporal paths, gravity then pulls the particles into larger, more ordered and complex structures—the model’s equivalent of galaxy clusters, stars and planetary systems. From there, the standard thermodynamic passage of time can manifest and unfold on each of the two divergent paths. In other words, the model has one past but two futures. As hinted by the time-indifferent laws of physics, time’s arrow may in a sense move in two directions, although any observer can only see and experience one. “It is the nature of gravity to pull the universe out of its primordial chaos and create structure, order and complexity,” Mercati says. “All the solutions break into two epochs, which go on forever in the two time directions, divided by this central state which has very characteristic properties.”

Although the model is crude, and does not incorporate either quantum mechanics or general relativity, its potential implications are vast. If it holds true for our actual universe, then the big bang could no longer be considered a cosmic beginning but rather only a phase in an effectively timeless and eternal universe. More prosaically, a two-branched arrow of time would lead to curious incongruities for observers on opposite sides. “This two-futures situation would exhibit a single, chaotic past in both directions, meaning that there would be essentially two universes, one on either side of this central state,” Barbour says. “If they were complicated enough, both sides could sustain observers who would perceive time going in opposite directions. Any intelligent beings there would define their arrow of time as moving away from this central state. They would think we now live in their deepest past.”

What’s more, Barbour says, if gravitation does prove to be fundamental to the arrow of time, this could sooner or later generate testable predictions and potentially lead to a less “ad hoc” explanation than inflation for the history and structure of our observable universe.

This is not the first rigorous two-futures solution for time’s arrow. Most notably, California Institute of Technology cosmologist Sean Carroll and a graduate student, Jennifer Chen, produced their own branching model in 2004, one that sought to explain the low-entropy origin of time’s arrow in the context of cosmic inflation and the creation of baby universes. They attribute the arrow of time’s emergence in their model not so much to entropy being very low in the past but rather to entropy being so much higher in both futures, increased by the inflation-driven creation of baby universes.

A decade on, Carroll is just as bullish about the prospect that increasing entropy alone is the source for time’s arrow, rather than other influences such as gravity. “Everything that happens in the universe to distinguish the past from the future is ultimately because the entropy is lower in one direction and higher in the other,” Carroll says. “This paper by Barbour, Koslowski and Mercati is good because they roll up their sleeves and do the calculations for their specific model of particles interacting via gravity, but I don’t think it’s the model that is interesting—it’s the model’s behavior being analyzed carefully…. I think basically any time you have a finite collection of particles in a really big space you’ll get this kind of generic behavior they describe. The real question is, is our universe like that? That’s the hard part.”

Together with Alan Guth, the Massachusetts Institute of Technology cosmologist who pioneered the theory of inflation, Carroll is now working on a thermodynamic response of sorts to the new claims for a gravitational arrow of time: Another exceedingly simple particle-based model universe that also naturally gives rise to time’s arrow, but without the addition of gravity or any other forces. The thermodynamic secret to the model’s success, they say, is assuming that the universe has an unlimited capacity for entropy.

“If we assume there is no maximum possible entropy for the universe, then any state can be a state of low entropy,” Guth says. “That may sound dumb, but I think it really works, and I also think it’s the secret of the Barbour et al construction. If there’s no limit to how big the entropy can get, then you can start anywhere, and from that starting point you’d expect entropy to rise as the system moves to explore larger and larger regions of phase space. Eternal inflation is a natural context in which to invoke this idea, since it looks like the maximum possible entropy is unlimited in an eternally inflating universe.”

The controversy over time’s arrow has come far since the 19th-century ideas of Boltzmann and the 20th-century notions of Eddington, but in many ways, Barbour says, the debate at its core remains appropriately timeless. “This is opening up a completely new way to think about a fundamental problem, the nature of the arrow of time and the origin of the second law of thermodynamics,” Barbour says. “But really we’re just investigating a new aspect of Newton’s gravitation, which hadn’t been noticed before. Who knows what might flow from this with further work and elaboration?”

“Arthur Eddington coined the term ‘arrow of time,’ and famously said the shuffling of material and energy is the only thing which nature cannot undo,” Barbour adds. “And here we are, showing beyond any doubt really that this is in fact exactly what gravity does. It takes systems that look extraordinarily disordered and makes them wonderfully ordered. And this is what has happened in our universe. We are realizing the ancient Greek dream of order out of chaos.”

