CAT RC Questions | CAT RC Questions Based on Natural Sciences

READING COMPREHENSION Based on NATURAL SCIENCES — Passages based on the Physical Sciences, Life Sciences, Natural Phenomena, Astronomy, etc. CAT Past-Year VARC Questions

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

The biggest challenge [The Nutmeg’s Curse by Ghosh] throws down is to the prevailing understanding of when the climate crisis started. Most of us have accepted . . . that it started with the widespread use of coal at the beginning of the Industrial Age in the 18th century and worsened with the mass adoption of oil and natural gas in the 20th. Ghosh takes this history at least three centuries back, to the start of European colonialism in the 15th century. He [starts] the book with a 1621 massacre by Dutch invaders determined to impose a monopoly on nutmeg cultivation and trade in the Banda islands in today’s Indonesia. Not only do the Dutch systematically depopulate the islands through genocide, they also try their best to bring nutmeg cultivation into plantation mode. These are the two points to which Ghosh returns through examples from around the world. One, how European colonialists decimated not only indigenous populations but also indigenous understanding of the relationship between humans and Earth. Two, how this was an invasion not only of humans but of the Earth itself, and how this continues to the present day by looking at nature as a ‘resource’ to exploit. . . . We know we are facing more frequent and more severe heatwaves, storms, floods, droughts and wildfires due to climate change. We know our expansion through deforestation, dam building, canal cutting – in short, terraforming, the word Ghosh uses – has brought us repeated disasters . . . Are these the responses of an angry Gaia who has finally had enough? By using the word ‘curse’ in the title, the author makes it clear that he thinks so. I use the pronoun ‘who’ knowingly, because Ghosh has quoted many non-European sources to enquire into the relationship between humans and the world around them so that he can question the prevalent way of looking at Earth as an inert object to be exploited to the maximum. As Ghosh’s text, notes and bibliography show once more, none of this is new. There have always been challenges to the way European colonialists looked at other civilisations and at Earth. It is just that the invaders and their myriad backers in the fields of economics, politics, anthropology, philosophy, literature, technology, physics, chemistry, biology have dominated global intellectual discourse. . . . There are other points of view that we can hear today if we listen hard enough. Those observing global climate negotiations know about the Latin American way of looking at Earth as Pachamama (Earth Mother). They also know how such a framing is just provided lip service and is ignored in the substantive portions of the negotiations. In The Nutmeg’s Curse, Ghosh explains why. He shows the extent of the vested interest in the oil economy – not only for oil-exporting countries, but also for a superpower like the US that controls oil drilling, oil prices and oil movement around the world. Many of us know power utilities are sabotaging decentralised solar power generation today because it hits their revenues and control. And how the other points of view are so often drowned out.

CAT/2023.3(RC)

Question. 1

All of the following can be inferred from the reviewer’s discussion of “The Nutmeg’s Curse”, EXCEPT:

Question. 2

Which one of the following, if true, would make the reviewer’s choice of the pronoun “who” for Gaia inappropriate?

Question. 3

On the basis of information in the passage, which one of the following is NOT a reason for the failure of policies seeking to address climate change?

Question. 4

Which one of the following best explains the primary purpose of the discussion of the colonisation of the Banda islands in “The Nutmeg’s Curse”?

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

As software improves, the people using it become less likely to sharpen their own know-how. Applications that offer lots of prompts and tips are often to blame; simpler, less solicitous programs push people harder to think, act and learn. Ten years ago, information scientists at Utrecht University in the Netherlands had a group of people carry out complicated analytical and planning tasks using either rudimentary software that provided no assistance or sophisticated software that offered a great deal of aid. The researchers found that the people using the simple software developed better strategies, made fewer mistakes and developed a deeper aptitude for the work. The people using the more advanced software, meanwhile, would often “aimlessly click around” when confronted with a tricky problem. The supposedly helpful software actually short-circuited their thinking and learning. [According to] philosopher Hubert Dreyfus . . . . our skills get sharper only through practice, when we use them regularly to overcome different sorts of difficult challenges. The goal of modern software, by contrast, is to ease our way through such challenges. Arduous, painstaking work is exactly what programmers are most eager to automate—after all, that is where the immediate efficiency gains tend to lie. In other words, a fundamental tension ripples between the interests of the people doing the automation and the interests of the people doing the work. Nevertheless, automation’s scope continues to widen. With the rise of electronic health records, physicians increasingly rely on software templates to guide them through patient exams. The programs incorporate valuable checklists and alerts, but they also make medicine more routinized and formulaic—and distance doctors from their patients. . . . Harvard Medical School professor Beth Lown, in a 2012 journal article . . . warned that when doctors become “screen-driven,” following a computer’s prompts rather than “the patient’s narrative thread,” their thinking can become constricted. In the worst cases, they may miss important diagnostic signals. . . . In a recent paper published in the journal Diagnosis, three medical researchers . . . examined the misdiagnosis of Thomas Eric Duncan, the first person to die of Ebola in the U.S., at Texas Health Presbyterian Hospital Dallas. They argue that the digital templates used by the hospital’s clinicians to record patient information probably helped to induce a kind of tunnel vision. “These highly constrained tools,” the researchers write, “are optimized for data capture but at the expense of sacrificing their utility for appropriate triage and diagnosis, leading users to miss the forest for the trees.” Medical software, they write, is no “replacement for basic history-taking, examination skills, and critical thinking.” . . . There is an alternative. In “human-centered automation,” the talents of people take precedence. . . . In this model, software plays an essential but secondary role. It takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement.

CAT/2022.3(RC)

Question. 5

Which one of the following sets of words/phrases best serves as keywords to the passage?

Question. 6

The author claims that, “The apparent veil between the organic and the manufactured has crumpled to reveal that the two really are, and have always been, of one being.” Which one of the following statements best expresses the point being made by the author here?

Question. 7

None of the following statements is implied by the arguments of the passage, EXCEPT:

Question. 8

The author claims that, “Part of this bionic convergence is a matter of words”. Which one of the following statements best expresses the point being made by the author?

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

[Octopuses are] misfits in their own extended families . . . They belong to the Mollusca class Cephalopoda. But they don’t look like their cousins at all. Other molluscs include sea snails, sea slugs, bivalves – most are shelled invertebrates with a dorsal foot. Cephalopods are all arms, and can be as tiny as 1 centimetre and as large as 30 feet. Some of them have brains the size of a walnut, which is large for an invertebrate. . . . It makes sense for these molluscs to have added protection in the form of a higher cognition; they don’t have a shell covering them, and pretty much everything feeds on cephalopods, including humans. But how did cephalopods manage to secure their own invisibility cloak? Cephalopods fire from multiple cylinders to achieve this in varying degrees from species to species. There are four main catalysts – chromatophores, iridophores, papillae and leucophores. . . . [Chromatophores] are organs on their bodies that contain pigment sacs, which have red, yellow and brown pigment granules. These sacs have a network of radial muscles, meaning muscles arranged in a circle radiating outwards. These are connected to the brain by a nerve. When the cephalopod wants to change colour, the brain carries an electrical impulse through the nerve to the muscles that expand outwards, pulling open the sacs to display the colours on the skin. Why these three colours? Because these are the colours the light reflects at the depths they live in (the rest is absorbed before it reaches those depths). . . . Well, what about other colours? Cue the iridophores. Think of a second level of skin that has thin stacks of cells. These can reflect light back at different wavelengths. . . . It’s using the same properties that we’ve seen in hologram stickers, or rainbows on puddles of oil. You move your head and you see a different colour. The sticker isn’t doing anything but reflecting light – it’s your movement that’s changing the appearance of the colour. This property of holograms, oil and other such surfaces is called “iridescence”. . . . Papillae are sections of the skin that can be deformed to make a texture bumpy. Even humans possess them (goosebumps) but cannot use them in the manner that cephalopods can. For instance, the use of these cells is how an octopus can wrap itself over a rock and appear jagged or how a squid or cuttlefish can imitate the look of a coral reef by growing miniature towers on its skin. It actually matches the texture of the substrate it chooses. Finally, the leucophores: According to a paper published in Nature, cuttlefish and octopuses possess an additional type of reflector cell called a leucophore. They are cells that scatter full spectrum light so that they appear white in a similar way that a polar bear’s fur appears white. Leucophores will also reflect any filtered light shone on them . . . If the water appears blue at a certain depth, the octopuses and cuttlefish can appear blue; if the water appears green, they appear green, and so on and so forth.

CAT/2022.2(RC)

Question. 9

Which one of the following statements is not true about the camouflaging ability of Cephalopods?

Question. 10

All of the following are reasons for octopuses being “misfits” EXCEPT that they:

Question. 11

Based on the passage, it can be inferred that camouflaging techniques in an octopus are most dissimilar to those in:

Question. 12

Based on the passage, we can infer that all of the following statements, if true, would weaken the camouflaging adeptness of Cephalopods EXCEPT:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

We cannot travel outside our neighbourhood without passports. We must wear the same plain clothes. We must exchange our houses every ten years. We cannot avoid labour. We all go to bed at the same time . . . We have religious freedom, but we cannot deny that the soul dies with the body, since ‘but for the fear of punishment, they would have nothing but contempt for the laws and customs of society'. . . . In More’s time, for much of the population, given the plenty and security on offer, such restraints would not have seemed overly unreasonable. For modern readers, however, Utopia appears to rely upon relentless transparency, the repression of variety, and the curtailment of privacy. Utopia provides security: but at what price? In both its external and internal relations, indeed, it seems perilously dystopian.

Such a conclusion might be fortified by examining selectively the tradition which follows More on these points. This often portrays societies where . . . 'it would be almost impossible for man to be depraved, or wicked'. . . . This is achieved both through institutions and mores, which underpin the common life. . . . The passions are regulated and inequalities of wealth and distinction are minimized. Needs, vanity, and emulation are restrained, often by prizing equality and holding riches in contempt. The desire for public power is curbed. Marriage and sexual intercourse are often controlled: in Tommaso Campanella’s The City of the Sun (1623), the first great literary utopia after More’s, relations are forbidden to men before the age of twenty-one and women before nineteen. Communal child-rearing is normal; for Campanella this commences at age two. Greater simplicity of life, ‘living according to nature’, is often a result: the desire for simplicity and purity are closely related. People become more alike in appearance, opinion, and outlook than they often have been. Unity, order, and homogeneity thus prevail at the cost of individuality and diversity. This model, as J. C. Davis demonstrates, dominated early modern utopianism. . . . And utopian homogeneity remains a familiar theme well into the twentieth century.

Given these considerations, it is not unreasonable to take as our starting point here the hypothesis that utopia and dystopia evidently share more in common than is often supposed. Indeed, they might be twins, the progeny of the same parents. Insofar as this proves to be the case, my linkage of both here will be uncomfortably close for some readers. Yet we should not mistake this argument for the assertion that all utopias are, or tend to produce, dystopias. Those who defend this proposition will find that their association here is not nearly close enough. For we have only to acknowledge the existence of thousands of successful intentional communities in which a cooperative ethos predominates and where harmony without coercion is the rule to set aside such an assertion. Here the individual’s submersion in the group is consensual (though this concept is not unproblematic). It results not in enslavement but voluntary submission to group norms. Harmony is achieved without . . . harming others.

CAT/2021.1(RC)

Question. 13

Which sequence of words below best captures the narrative of the passage?

Question. 14

Following from the passage, which one of the following may be seen as a characteristic of a utopian society?

Question. 15

All of the following statements can be inferred from the passage EXCEPT that:

Question. 16

All of the following arguments are made in the passage EXCEPT that:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

It has been said that knowledge, or the problem of knowledge, is the scandal of philosophy. The scandal is philosophy’s apparent inability to show how, when and why we can be sure that we know something or, indeed, that we know anything. Philosopher Michael Williams writes: ‘Is it possible to obtain knowledge at all? This problem is pressing because there are powerful arguments, some very ancient, for the conclusion that it is not . . . Scepticism is the skeleton in Western rationalism’s closet’. While it is not clear that the scandal matters to anyone but philosophers, philosophers point out that it should matter to everyone, at least given a certain conception of knowledge. For, they explain, unless we can ground our claims to knowledge as such, which is to say, distinguish it from mere opinion, superstition, fantasy, wishful thinking, ideology, illusion or delusion, then the actions we take on the basis of presumed knowledge – boarding an airplane, swallowing a pill, finding someone guilty of a crime – will be irrational and unjustifiable.

That is all quite serious-sounding but so also are the rattlings of the skeleton: that is, the sceptic’s contention that we cannot be sure that we know anything – at least not if we think of knowledge as something like having a correct mental representation of reality, and not if we think of reality as something like things-as-they-are-in-themselves, independent of our perceptions, ideas or descriptions. For, the sceptic will note, since reality, under that conception of it, is outside our ken (we cannot catch a glimpse of things-in-themselves around the corner of our own eyes; we cannot form an idea of reality that floats above the processes of our conceiving it), we have no way to compare our mental representations with things-as-they-are-in-themselves and therefore no way to determine whether they are correct or incorrect. Thus the sceptic may repeat (rattling loudly), you cannot be sure you ‘know’ something or anything at all – at least not, he may add (rattling softly before disappearing), if that is the way you conceive ‘knowledge’.

There are a number of ways to handle this situation. The most common is to ignore it. Most people outside the academy – and, indeed, most of us inside it – are unaware of or unperturbed by the philosophical scandal of knowledge and go about our lives without too many epistemic anxieties. We hold our beliefs and presumptive knowledges more or less confidently, usually depending on how we acquired them (I saw it with my own eyes; I heard it on Fox News; a guy at the office told me) and how broadly and strenuously they seem to be shared or endorsed by various relevant people: experts and authorities, friends and family members, colleagues and associates. And we examine our convictions more or less closely, explain them more or less extensively, and defend them more or less vigorously, usually depending on what seems to be at stake for ourselves and/or other people and what resources are available for reassuring ourselves or making our beliefs credible to others (look, it’s right here on the page; add up the figures yourself; I happen to be a heart specialist).

CAT/2021.2(RC)

Question. 17

The author of the passage is most likely to support which one of the following statements?

Question. 18

“. . . we cannot catch a glimpse of things-in-themselves around the corner of our own eyes; we cannot form an idea of reality that floats above the processes of our conceiving it . . .” Which one of the following statements best reflects the argument being made in this sentence?

Question. 19

The author discusses all of the following arguments in the passage, EXCEPT:

Question. 20

According to the last paragraph of the passage, “We hold our beliefs and presumptive knowledge more or less confidently, usually depending on” something. Which one of the following most broadly captures what we depend on?

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Back in the early 2000s, an awesome thing happened in the New X-Men comics. Our mutant heroes had been battling giant robots called Sentinels for years, but suddenly these mechanical overlords spawned a new threat: Nano-Sentinels! Not content to rule Earth with their metal fists, these tiny robots invaded our bodies at the microscopic level. Infected humans were slowly converted into machines, cell by cell.

Now, a new wave of extremely odd robots is making at least part of the Nano-Sentinels story come true. Using exotic fabrication materials like squishy hydrogels and elastic polymers, researchers are making autonomous devices that are often tiny and that could turn out to be more powerful than an army of Terminators. Some are 1-centimetre blobs that can skate over water. Others are flat sheets that can roll themselves into tubes, or matchstick-sized plastic coils that act as powerful muscles. No, they won’t be invading our bodies and turning us into Sentinels – which I personally find a little disappointing – but some of them could one day swim through our bloodstream to heal us. They could also clean up pollutants in water or fold themselves into different kinds of vehicles for us to drive. . . .

Unlike a traditional robot, which is made of mechanical parts, these new kinds of robots are made from molecular parts. The principle is the same: both are devices that can move around and do things independently. But a robot made from smart materials might be nothing more than a pink drop of hydrogel. Instead of gears and wires, it’s assembled from two kinds of molecules – some that love water and some that avoid it – which interact to allow the bot to skate on top of a pond.

Sometimes these materials are used to enhance more conventional robots. One team of researchers, for example, has developed a different kind of hydrogel that becomes sticky when exposed to a low-voltage zap of electricity and then stops being sticky when the electricity is switched off. This putty-like gel can be pasted right onto the feet or wheels of a robot. When the robot wants to climb a sheer wall or scoot across the ceiling, it can activate its sticky feet with a few volts. Once it is back on a flat surface again, the robot turns off the adhesive like a light switch.

Robots that are wholly or partly made of gloop aren’t the future that I was promised in science fiction. But it’s definitely the future I want. I’m especially keen on the nanometre-scale “soft robots” that could one day swim through our bodies. Metin Sitti, a director at the Max Planck Institute for Intelligent Systems in Germany, worked with colleagues to prototype these tiny, synthetic beasts using various stretchy materials, such as simple rubber, and seeding them with magnetic microparticles. They are assembled into a finished shape by applying magnetic fields. The results look like flowers or geometric shapes made from Tinkertoy ball and stick modelling kits. They’re guided through tubes of fluid using magnets, and can even stop and cling to the sides of a tube.

CAT/2021.3(RC)

Question. 21

Which one of the following statements best captures the sense of the first paragraph?

Question. 22

Which one of the following statements, if true, would be the most direct extension of the arguments in the passage?

Question. 23

Which one of the following scenarios, if false, could be seen as supporting the passage?

Question. 24

Which one of the following statements best summarises the central point of the passage?

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Keeping time accurately comes with a price. The maximum accuracy of a clock is directly related to how much disorder, or entropy, it creates every time it ticks. Natalia Ares at the University of Oxford and her colleagues made this discovery using a tiny clock with an accuracy that can be controlled. The clock consists of a 50-nanometre-thick membrane of silicon nitride, vibrated by an electric current. Each time the membrane moved up and down once and then returned to its original position, the researchers counted a tick, and the regularity of the spacing between the ticks represented the accuracy of the clock. The researchers found that as they increased the clock’s accuracy, the heat produced in the system grew, increasing the entropy of its surroundings by jostling nearby particles . . . “If a clock is more accurate, you are paying for it somehow,” says Ares. In this case, you pay for it by pouring more ordered energy into the clock, which is then converted into entropy. “By measuring time, we are increasing the entropy of the universe,” says Ares. The more entropy there is in the universe, the closer it may be to its eventual demise. “Maybe we should stop measuring time,” says Ares. The scale of the additional entropy is so small, though, that there is no need to worry about its effects, she says.

The increase in entropy in timekeeping may be related to the “arrow of time”, says Marcus Huber at the Austrian Academy of Sciences in Vienna, who was part of the research team. It has been suggested that the reason that time only flows forward, not in reverse, is that the total amount of entropy in the universe is constantly increasing, creating disorder that cannot be put in order again.

The relationship that the researchers found is a limit on the accuracy of a clock, so it doesn’t mean that a clock that creates the most possible entropy would be maximally accurate – hence a large, inefficient grandfather clock isn’t more precise than an atomic clock. “It’s a bit like fuel use in a car. Just because I’m using more fuel doesn’t mean that I’m going faster or further,” says Huber.

When the researchers compared their results with theoretical models developed for clocks that rely on quantum effects, they were surprised to find that the relationship between accuracy and entropy seemed to be the same for both. . . . We can’t be sure yet that these results are actually universal, though, because there are many types of clocks for which the relationship between accuracy and entropy hasn’t been tested. “It’s still unclear how this principle plays out in real devices such as atomic clocks, which push the ultimate quantum limits of accuracy,” says Mark Mitchison at Trinity College Dublin in Ireland. Understanding this relationship could be helpful for designing clocks in the future, particularly those used in quantum computers and other devices where both accuracy and temperature are crucial, says Ares. This finding could also help us understand more generally how the quantum world and the classical world are similar and different in terms of thermodynamics and the passage of time.

CAT/2021.3(RC)

Question. 25

Which one of the following sets of words and phrases serves best as keywords of the passage?

Question. 26

“It’s a bit like fuel use in a car. Just because I’m using more fuel doesn’t mean that I’m going faster or further . . .” What is the purpose of this example?

Question. 27

The author makes all of the following arguments in the passage, EXCEPT that:

Question. 28

None of the following statements can be inferred from the passage EXCEPT that:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

In the late 1960s, while studying the northern-elephant-seal population along the coasts of Mexico and California, Burney Le Boeuf and his colleagues couldn’t help but notice that the threat calls of males at some sites sounded different from those of males at other sites. . . . That was the first time dialects were documented in a nonhuman mammal. . . .

All the northern elephant seals that exist today are descendants of the small herd that survived on Isla Guadalupe [after the near extinction of the species in the nineteenth century]. As that tiny population grew, northern elephant seals started to recolonize former breeding locations. It was precisely on the more recently colonized islands where Le Boeuf found that the tempos of the male vocal displays showed stronger differences to the ones from Isla Guadalupe, the founder colony. 

In order to test the reliability of these dialects over time, Le Boeuf and other researchers visited Año Nuevo Island in California—the island where males showed the slowest pulse rates in their calls—every winter from 1968 to 1972. “What we found is that the pulse rate increased, but it still remained relatively slow compared to the other colonies we had measured in the past,” Le Boeuf told me.

At the individual level, the pulse of the calls stayed the same: A male would maintain his vocal signature throughout his lifetime. But the average pulse rate was changing. Immigration could have been responsible for this increase, as in the early 1970s, 43 percent of the males on Año Nuevo had come from southern rookeries that had a faster pulse rate. This led Le Boeuf and his collaborator, Lewis Petrinovich, to deduce that the dialects were, perhaps, a result of isolation over time, after the breeding sites had been recolonized. For instance, the first settlers of Año Nuevo could have had, by chance, calls with low pulse rates. At other sites, where the scientists found faster pulse rates, the opposite would have happened—seals with faster rates would have happened to arrive first.

As the population continued to expand and the islands kept on receiving immigrants from the original population, the calls in all locations would have eventually regressed to the average pulse rate of the founder colony. In the decades that followed, scientists noticed that the geographical variations reported in 1969 were not obvious anymore. . . . In the early 2010s, while studying northern elephant seals on Año Nuevo Island, [researcher Caroline] Casey noticed, too, that what Le Boeuf had heard decades ago was not what she heard now. . . . By performing more sophisticated statistical analyses on both sets of data, [Casey and Le Boeuf] confirmed that dialects existed back then but had vanished. Yet there are other differences between the males from the late 1960s and their great-great-grandsons: Modern males exhibit more individual diversity, and their calls are more complex. While 50 years ago the drumming pattern was quite simple and the dialects denoted just a change in tempo, Casey explained, the calls recorded today have more complex structures, sometimes featuring doublets or triplets. . . .

CAT/2020.1(RC)

Question. 29

Which one of the following conditions, if true, could have ensured that male northern elephant seal dialects did not disappear?

Question. 30

All of the following can be inferred from Le Boeuf’s study as described in the passage EXCEPT that:

Question. 31

Which one of the following best sums up the overall history of transformation of male northern elephant seal calls?

Question. 32

From the passage, it can be inferred that the call pulse rate of male northern elephant seals in the southern rookeries was faster because:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

In a low-carbon world, renewable energy technologies are hot business. For investors looking to redirect funds, wind turbines and solar panels, among other technologies, seem a straightforward choice. But renewables need to be further scrutinized before being championed as forging a path toward a low-carbon future. Both the direct and indirect impacts of renewable energy must be examined to ensure that a climate-smart future does not intensify social and environmental harm. As renewable energy production requires land, water, and labor, among other inputs, it imposes costs on people and the environment. Hydropower projects, for instance, have led to community dispossession and exclusion . . . Renewable energy supply chains are also intertwined with mining, and their technologies contribute to growing levels of electronic waste . . . Furthermore, although renewable energy can be produced and distributed through small-scale, local systems, such an approach might not generate the high returns on investment needed to attract capital.

Although an emerging sector, renewables are enmeshed in long-standing resource extraction through their dependence on minerals and metals . . . Scholars document the negative consequences of mining . . . even for mining operations that commit to socially responsible practices[:] “many of the world’s largest reservoirs of minerals like cobalt, copper, lithium, [and] rare earth minerals”—the ones needed for renewable technologies—“are found in fragile states and under communities of marginalized peoples in Africa, Asia, and Latin America.” Since the demand for metals and minerals will increase substantially in a renewable-powered future . . . this intensification could exacerbate the existing consequences of extractive activities.

Among the connections between climate change and waste, O’Neill . . . highlights that “devices developed to reduce our carbon footprint, such as lithium batteries for hybrid and electric cars or solar panels[,] become potentially dangerous electronic waste at the end of their productive life.” The disposal of toxic waste has long perpetuated social injustice through the flows of waste to the Global South and to marginalized communities in the Global North . . .

While renewable energy is a more recent addition to financial portfolios, investments in the sector must be considered in light of our understanding of capital accumulation. As agricultural finance reveals, the concentration of control of corporate activity facilitates profit generation. For some climate activists, the promise of renewables rests on their ability not only to reduce emissions but also to provide distributed, democratized access to energy . . . But Burke and Stephens . . . caution that “renewable energy systems offer a possibility but not a certainty for more democratic energy futures.” Small-scale, distributed forms of energy are only highly profitable to institutional investors if control is consolidated somewhere in the financial chain. Renewable energy can be produced at the household or neighborhood level. However, such small-scale, localized production is unlikely to generate high returns for investors. For financial growth to be sustained and expanded by the renewable sector, production and trade in renewable energy technologies will need to be highly concentrated, and large asset management firms will likely drive those developments.

CAT/2020.2(RC)

Question. 33

All of the following statements, if true, could be seen as supporting the arguments in the passage, EXCEPT:

Question. 34

Which one of the following statements, if false, could be seen as best supporting the arguments in the passage?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

In a low-carbon world, renewable energy technologies are hot business. For investors looking to redirect funds, wind turbines and solar panels, among other technologies, seem a straightforward choice. But renewables need to be further scrutinized before being championed as forging a path toward a low-carbon future. Both the direct and indirect impacts of renewable energy must be examined to ensure that a climate-smart future does not intensify social and environmental harm. As renewable energy production requires land, water, and labor, among other inputs, it imposes costs on people and the environment. Hydropower projects, for instance, have led to community dispossession and exclusion . . . Renewable energy supply chains are also intertwined with mining, and their technologies contribute to growing levels of electronic waste . . . Furthermore, although renewable energy can be produced and distributed through small-scale, local systems, such an approach might not generate the high returns on investment needed to attract capital.

Although an emerging sector, renewables are enmeshed in long-standing resource extraction through their dependence on minerals and metals . . . Scholars document the negative consequences of mining . . . even for mining operations that commit to socially responsible practices[:] “many of the world’s largest reservoirs of minerals like cobalt, copper, lithium, [and] rare earth minerals”—the ones needed for renewable technologies—“are found in fragile states and under communities of marginalized peoples in Africa, Asia, and Latin America.” Since the demand for metals and minerals will increase substantially in a renewable-powered future . . . this intensification could exacerbate the existing consequences of extractive activities.

Among the connections between climate change and waste, O’Neill . . . highlights that “devices developed to reduce our carbon footprint, such as lithium batteries for hybrid and electric cars or solar panels[,] become potentially dangerous electronic waste at the end of their productive life.” The disposal of toxic waste has long perpetuated social injustice through the flows of waste to the Global South and to marginalized communities in the Global North . . .

While renewable energy is a more recent addition to financial portfolios, investments in the sector must be considered in light of our understanding of capital accumulation. As agricultural finance reveals, the concentration of control of corporate activity facilitates profit generation. For some climate activists, the promise of renewables rests on their ability not only to reduce emissions but also to provide distributed, democratized access to energy . . . But Burke and Stephens . . . caution that “renewable energy systems offer a possibility but not a certainty for more democratic energy futures.” Small-scale, distributed forms of energy are only highly profitable to institutional investors if control is consolidated somewhere in the financial chain. Renewable energy can be produced at the household or neighborhood level. However, such small-scale, localized production is unlikely to generate high returns for investors. For financial growth to be sustained and expanded by the renewable sector, production and trade in renewable energy technologies will need to be highly concentrated, and large asset management firms will likely drive those developments.

CAT/2020.2(RC)

Question. 35

Which one of the following statements, if true, could be an accurate inference from the first paragraph of the passage?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

In a low-carbon world, renewable energy technologies are hot business. For investors looking to redirect funds, wind turbines and solar panels, among other technologies, seem a straightforward choice. But renewables need to be further scrutinized before being championed as forging a path toward a low-carbon future. Both the direct and indirect impacts of renewable energy must be examined to ensure that a climate-smart future does not intensify social and environmental harm. As renewable energy production requires land, water, and labor, among other inputs, it imposes costs on people and the environment. Hydropower projects, for instance, have led to community dispossession and exclusion . . . Renewable energy supply chains are also intertwined with mining, and their technologies contribute to growing levels of electronic waste . . . Furthermore, although renewable energy can be produced and distributed through small-scale, local systems, such an approach might not generate the high returns on investment needed to attract capital.

Although an emerging sector, renewables are enmeshed in long-standing resource extraction through their dependence on minerals and metals . . . Scholars document the negative consequences of mining . . . even for mining operations that commit to socially responsible practices[:] “many of the world’s largest reservoirs of minerals like cobalt, copper, lithium, [and] rare earth minerals”—the ones needed for renewable technologies—“are found in fragile states and under communities of marginalized peoples in Africa, Asia, and Latin America.” Since the demand for metals and minerals will increase substantially in a renewable-powered future . . . this intensification could exacerbate the existing consequences of extractive activities.

Among the connections between climate change and waste, O’Neill . . . highlights that “devices developed to reduce our carbon footprint, such as lithium batteries for hybrid and electric cars or solar panels[,] become potentially dangerous electronic waste at the end of their productive life.” The disposal of toxic waste has long perpetuated social injustice through the flows of waste to the Global South and to marginalized communities in the Global North . . .

While renewable energy is a more recent addition to financial portfolios, investments in the sector must be considered in light of our understanding of capital accumulation. As agricultural finance reveals, the concentration of control of corporate activity facilitates profit generation. For some climate activists, the promise of renewables rests on their ability not only to reduce emissions but also to provide distributed, democratized access to energy . . . But Burke and Stephens . . . caution that “renewable energy systems offer a possibility but not a certainty for more democratic energy futures.” Small-scale, distributed forms of energy are only highly profitable to institutional investors if control is consolidated somewhere in the financial chain. Renewable energy can be produced at the household or neighborhood level. However, such small-scale, localized production is unlikely to generate high returns for investors. For financial growth to be sustained and expanded by the renewable sector, production and trade in renewable energy technologies will need to be highly concentrated, and large asset management firms will likely drive those developments.

CAT/2020.2(RC)

Question. 36

Which one of the following statements best captures the main argument of the last paragraph of the passage?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

In a low-carbon world, renewable energy technologies are hot business. For investors looking to redirect funds, wind turbines and solar panels, among other technologies, seem a straightforward choice. But renewables need to be further scrutinized before being championed as forging a path toward a low-carbon future. Both the direct and indirect impacts of renewable energy must be examined to ensure that a climate-smart future does not intensify social and environmental harm. As renewable energy production requires land, water, and labor, among other inputs, it imposes costs on people and the environment. Hydropower projects, for instance, have led to community dispossession and exclusion . . . Renewable energy supply chains are also intertwined with mining, and their technologies contribute to growing levels of electronic waste . . . Furthermore, although renewable energy can be produced and distributed through small-scale, local systems, such an approach might not generate the high returns on investment needed to attract capital.

Although an emerging sector, renewables are enmeshed in long-standing resource extraction through their dependence on minerals and metals . . . Scholars document the negative consequences of mining . . . even for mining operations that commit to socially responsible practices[:] “many of the world’s largest reservoirs of minerals like cobalt, copper, lithium, [and] rare earth minerals”—the ones needed for renewable technologies—“are found in fragile states and under communities of marginalized peoples in Africa, Asia, and Latin America.” Since the demand for metals and minerals will increase substantially in a renewable-powered future . . . this intensification could exacerbate the existing consequences of extractive activities.

Among the connections between climate change and waste, O’Neill . . . highlights that “devices developed to reduce our carbon footprint, such as lithium batteries for hybrid and electric cars or solar panels[,] become potentially dangerous electronic waste at the end of their productive life.” The disposal of toxic waste has long perpetuated social injustice through the flows of waste to the Global South and to marginalized communities in the Global North . . .

While renewable energy is a more recent addition to financial portfolios, investments in the sector must be considered in light of our understanding of capital accumulation. As agricultural finance reveals, the concentration of control of corporate activity facilitates profit generation. For some climate activists, the promise of renewables rests on their ability not only to reduce emissions but also to provide distributed, democratized access to energy . . . But Burke and Stephens . . . caution that “renewable energy systems offer a possibility but not a certainty for more democratic energy futures.” Small-scale, distributed forms of energy are only highly profitable to institutional investors if control is consolidated somewhere in the financial chain. Renewable energy can be produced at the household or neighborhood level. However, such small-scale, localized production is unlikely to generate high returns for investors. For financial growth to be sustained and expanded by the renewable sector, production and trade in renewable energy technologies will need to be highly concentrated, and large asset management firms will likely drive those developments.

CAT/2020.2(RC)

Question. 37

Based on the passage, we can infer that the author would be most supportive of which one of the following practices?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Scientists recently discovered that Emperor Penguins—one of Antarctica’s most celebrated species—employ a particularly unusual technique for surviving the daily chill. As detailed in an article published today in the journal Biology Letters, the birds minimize heat loss by keeping the outer surface of their plumage below the temperature of the surrounding air. At the same time, the penguins’ thick plumage insulates their body and keeps it toasty. . . .

The researchers analyzed thermographic images . . . taken over roughly a month during June 2008. During that period, the average air temperature was 0.32 degrees Fahrenheit. At the same time, the majority of the plumage covering the penguins’ bodies was even colder: the surface of their warmest body part, their feet, was an average 1.76 degrees Fahrenheit, but the plumage on their heads, chests and backs were -1.84, -7.24 and -9.76 degrees Fahrenheit respectively. Overall, nearly the entire outer surface of the penguins’ bodies was below freezing at all times, except for their eyes and beaks. The scientists also used a computer simulation to determine how much heat was lost or gained from each part of the body—and discovered that by keeping their outer surface below air temperature, the birds might paradoxically be able to draw very slight amounts of heat from the air around them. The key to their trick is the difference between two different types of heat transfer: radiation and convection.

The penguins do lose internal body heat to the surrounding air through thermal radiation, just as our bodies do on a cold day. Because their bodies (but not surface plumage) are warmer than the surrounding air, heat gradually radiates outward over time, moving from a warmer material to a colder one. To maintain body temperature while losing heat, penguins, like all warm-blooded animals, rely on the metabolism of food. The penguins, though, have an additional strategy. Since their outer plumage is even colder than the air, the simulation showed that they might gain back a little of this heat through thermal convection—the transfer of heat via the movement of a fluid (in this case, the air). As the cold Antarctic air cycles around their bodies, slightly warmer air comes into contact with the plumage and donates minute amounts of heat back to the penguins, then cycles away at a slightly colder temperature.

Most of this heat, the researchers note, probably doesn’t make it all the way through the plumage and back to the penguins’ bodies, but it could make a slight difference. At the very least, the method by which a penguin’s plumage wicks heat from the bitterly cold air that surrounds it helps to cancel out some of the heat that’s radiating from its interior. And given the Emperors’ unusually demanding breeding cycle, every bit of warmth counts. . . . Since [penguins trek as far as 75 miles to the coast to breed and male penguins] don’t eat anything during [the incubation period of 64 days], conserving calories by giving up as little heat as possible is absolutely crucial.

CAT/2019.1(RC)

Question. 38

In the last sentence of paragraph 3, “slightly warmer air” and “at a slightly colder temperature” refer to ______ AND ______ respectively:
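The radiation-versus-convection balance described in the third paragraph of the passage can be sketched with a rough back-of-the-envelope calculation. In the Python sketch below, the air temperature (0.32 degrees Fahrenheit) and the plumage temperature on the back (-9.76 degrees Fahrenheit) are taken from the passage; the plumage emissivity, the convective heat-transfer coefficient, and the effective sky temperature are assumed illustrative values, so the outputs indicate orders of magnitude only, not the study's actual results.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def f_to_k(temp_f):
    # Convert degrees Fahrenheit to kelvin.
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

t_air = f_to_k(0.32)    # average air temperature reported in the passage
t_back = f_to_k(-9.76)  # plumage temperature on the back, from the passage
t_sky = 230.0           # assumed effective radiative temperature of the polar sky, K
emissivity = 0.98       # assumed plumage emissivity
h_conv = 10.0           # assumed convective heat-transfer coefficient, W m^-2 K^-1

# Net thermal radiation from the plumage surface to the much colder sky
# (positive value = heat lost), via the Stefan-Boltzmann law.
q_rad = emissivity * SIGMA * (t_back ** 4 - t_sky ** 4)

# Convective exchange with the surrounding air (positive value = heat gained,
# because the air is slightly warmer than the plumage), via Newton's law of cooling.
q_conv = h_conv * (t_air - t_back)

print(f"radiative loss to the sky : {q_rad:5.1f} W/m^2")
print(f"convective gain from air  : {q_conv:5.1f} W/m^2")

With these assumed coefficients the sketch reproduces the sign pattern the passage describes: the plumage radiates heat to the colder sky while drawing some heat back from the slightly warmer air. The actual magnitudes depend entirely on the assumed values, and the study itself reports only very slight convective gains.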

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Scientists recently discovered that Emperor Penguins—one of Antarctica’s most celebrated species—employ a particularly unusual technique for surviving the daily chill. As detailed in an article published today in the journal Biology Letters, the birds minimize heat loss by keeping the outer surface of their plumage below the temperature of the surrounding air. At the same time, the penguins’ thick plumage insulates their body and keeps it toasty. . . .

The researchers analyzed thermographic images . . . taken over roughly a month during June 2008. During that period, the average air temperature was 0.32 degrees Fahrenheit. At the same time, the majority of the plumage covering the penguins’ bodies was even colder: the surface of their warmest body part, their feet, was an average 1.76 degrees Fahrenheit, but the plumage on their heads, chests and backs were -1.84, -7.24 and -9.76 degrees Fahrenheit respectively. Overall, nearly the entire outer surface of the penguins’ bodies was below freezing at all times, except for their eyes and beaks. The scientists also used a computer simulation to determine how much heat was lost or gained from each part of the body—and discovered that by keeping their outer surface below air temperature, the birds might paradoxically be able to draw very slight amounts of heat from the air around them. The key to their trick is the difference between two different types of heat transfer: radiation and convection.

The penguins do lose internal body heat to the surrounding air through thermal radiation, just as our bodies do on a cold day. Because their bodies (but not surface plumage) are warmer than the surrounding air, heat gradually radiates outward over time, moving from a warmer material to a colder one. To maintain body temperature while losing heat, penguins, like all warm-blooded animals, rely on the metabolism of food. The penguins, though, have an additional strategy. Since their outer plumage is even colder than the air, the simulation showed that they might gain back a little of this heat through thermal convection—the transfer of heat via the movement of a fluid (in this case, the air). As the cold Antarctic air cycles around their bodies, slightly warmer air comes into contact with the plumage and donates minute amounts of heat back to the penguins, then cycles away at a slightly colder temperature.

Most of this heat, the researchers note, probably doesn’t make it all the way through the plumage and back to the penguins’ bodies, but it could make a slight difference. At the very least, the method by which a penguin’s plumage wicks heat from the bitterly cold air that surrounds it helps to cancel out some of the heat that’s radiating from its interior. And given the Emperors’ unusually demanding breeding cycle, every bit of warmth counts. . . . Since [penguins trek as far as 75 miles to the coast to breed and male penguins] don’t eat anything during [the incubation period of 64 days], conserving calories by giving up as little heat as possible is absolutely crucial.

CAT/2019.1(RC)

Question. 39

Which of the following best explains the purpose of the word “paradoxically” as used by the author?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Scientists recently discovered that Emperor Penguins—one of Antarctica’s most celebrated species—employ a particularly unusual technique for surviving the daily chill. As detailed in an article published today in the journal Biology Letters, the birds minimize heat loss by keeping the outer surface of their plumage below the temperature of the surrounding air. At the same time, the penguins’ thick plumage insulates their body and keeps it toasty. . . .

The researchers analyzed thermographic images . . . taken over roughly a month during June 2008. During that period, the average air temperature was 0.32 degrees Fahrenheit. At the same time, the majority of the plumage covering the penguins’ bodies was even colder: the surface of their warmest body part, their feet, was an average 1.76 degrees Fahrenheit, but the plumage on their heads, chests and backs were -1.84, -7.24 and -9.76 degrees Fahrenheit respectively. Overall, nearly the entire outer surface of the penguins’ bodies was below freezing at all times, except for their eyes and beaks. The scientists also used a computer simulation to determine how much heat was lost or gained from each part of the body—and discovered that by keeping their outer surface below air temperature, the birds might paradoxically be able to draw very slight amounts of heat from the air around them. The key to their trick is the difference between two different types of heat transfer: radiation and convection.

The penguins do lose internal body heat to the surrounding air through thermal radiation, just as our bodies do on a cold day. Because their bodies (but not surface plumage) are warmer than the surrounding air, heat gradually radiates outward over time, moving from a warmer material to a colder one. To maintain body temperature while losing heat, penguins, like all warm-blooded animals, rely on the metabolism of food. The penguins, though, have an additional strategy. Since their outer plumage is even colder than the air, the simulation showed that they might gain back a little of this heat through thermal convection—the transfer of heat via the movement of a fluid (in this case, the air). As the cold Antarctic air cycles around their bodies, slightly warmer air comes into contact with the plumage and donates minute amounts of heat back to the penguins, then cycles away at a slightly colder temperature.

Most of this heat, the researchers note, probably doesn’t make it all the way through the plumage and back to the penguins’ bodies, but it could make a slight difference. At the very least, the method by which a penguin’s plumage wicks heat from the bitterly cold air that surrounds it helps to cancel out some of the heat that’s radiating from its interior. And given the Emperors’ unusually demanding breeding cycle, every bit of warmth counts. . . . Since [penguins trek as far as 75 miles to the coast to breed and male penguins] don’t eat anything during [the incubation period of 64 days], conserving calories by giving up as little heat as possible is absolutely crucial.

CAT/2019.1(RC)

Question. 40

All of the following, if true, would negate the findings of the study reported in the passage EXCEPT:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Scientists recently discovered that Emperor Penguins—one of Antarctica’s most celebrated species—employ a particularly unusual technique for surviving the daily chill. As detailed in an article published today in the journal Biology Letters, the birds minimize heat loss by keeping the outer surface of their plumage below the temperature of the surrounding air. At the same time, the penguins’ thick plumage insulates their body and keeps it toasty. . . .

The researchers analyzed thermographic images . . . taken over roughly a month during June 2008. During that period, the average air temperature was 0.32 degrees Fahrenheit. At the same time, the majority of the plumage covering the penguins’ bodies was even colder: the surface of their warmest body part, their feet, was an average 1.76 degrees Fahrenheit, but the plumage on their heads, chests and backs were -1.84, -7.24 and -9.76 degrees Fahrenheit respectively. Overall, nearly the entire outer surface of the penguins’ bodies was below freezing at all times, except for their eyes and beaks. The scientists also used a computer simulation to determine how much heat was lost or gained from each part of the body—and discovered that by keeping their outer surface below air temperature, the birds might paradoxically be able to draw very slight amounts of heat from the air around them. The key to their trick is the difference between two different types of heat transfer: radiation and convection.

The penguins do lose internal body heat to the surrounding air through thermal radiation, just as our bodies do on a cold day. Because their bodies (but not surface plumage) are warmer than the surrounding air, heat gradually radiates outward over time, moving from a warmer material to a colder one. To maintain body temperature while losing heat, penguins, like all warm-blooded animals, rely on the metabolism of food. The penguins, though, have an additional strategy. Since their outer plumage is even colder than the air, the simulation showed that they might gain back a little of this heat through thermal convection—the transfer of heat via the movement of a fluid (in this case, the air). As the cold Antarctic air cycles around their bodies, slightly warmer air comes into contact with the plumage and donates minute amounts of heat back to the penguins, then cycles away at a slightly colder temperature.

Most of this heat, the researchers note, probably doesn’t make it all the way through the plumage and back to the penguins’ bodies, but it could make a slight difference. At the very least, the method by which a penguin’s plumage wicks heat from the bitterly cold air that surrounds it helps to cancel out some of the heat that’s radiating from its interior. And given the Emperors’ unusually demanding breeding cycle, every bit of warmth counts. . . . Since [penguins trek as far as 75 miles to the coast to breed and male penguins] don’t eat anything during [the incubation period of 64 days], conserving calories by giving up as little heat as possible is absolutely crucial.

CAT/2019.1(RC)

Question. 41

Which of the following can be responsible for Emperor Penguins losing body heat?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The magic of squatter cities is that they are improved steadily and gradually by their residents. To a planner’s eye, these cities look chaotic. I trained as a biologist and to my eye, they look organic. Squatter cities are also unexpectedly green. They have maximum density—1 million people per square mile in some areas of Mumbai—and have minimum energy and material use. People get around by foot, bicycle, rickshaw, or the universal shared taxi.

Not everything is efficient in the slums, though. In the Brazilian favelas where electricity is stolen and therefore free, people leave their lights on all day. But in most slums recycling is literally a way of life. The Dharavi slum in Mumbai has 400 recycling units and 30,000 ragpickers. Six thousand tons of rubbish are sorted every day. In 2007, the Economist reported that in Vietnam and Mozambique, “Waves of gleaners sift the sweepings of Hanoi’s streets, just as Mozambiquan children pick over the rubbish of Maputo’s main tip. Every city in Asia and Latin America has an industry based on gathering up old cardboard boxes.” . . .

In his 1985 article, Calthorpe made a statement that still jars with most people: “The city is the most environmentally benign form of human settlement. Each city dweller consumes less land, less energy, less water, and produces less pollution than his counterpart in settlements of lower densities.” “Green Manhattan” was the inflammatory title of a 2004 New Yorker article by David Owen. “By the most significant measures,” he wrote, “New York is the greenest community in the United States, and one of the greenest cities in the world . . . The key to New York’s relative environmental benignity is its extreme compactness. . . . Placing one and a half million people on a twenty-three-square-mile island sharply reduces their opportunities to be wasteful.” He went on to note that this very compactness forces people to live in the world’s most energy-efficient apartment buildings. . . .

Urban density allows half of humanity to live on 2.8 per cent of the land. . . . Consider just the infrastructure efficiencies. According to a 2004 UN report: “The concentration of population and enterprises in urban areas greatly reduces the unit cost of piped water, sewers, drains, roads, electricity, garbage collection, transport, health care, and schools.” . . .

[T]he nationally subsidised city of Manaus in northern Brazil “answers the question” of how to stop deforestation: give people decent jobs. Then they can afford houses, and gain security. One hundred thousand people who would otherwise be deforesting the jungle around Manaus are now prospering in town making such things as mobile phones and televisions. . . .

Of course, fast-growing cities are far from an unmitigated good. They concentrate crime, pollution, disease and injustice as much as business, innovation, education and entertainment. . . . But if they are overall a net good for those who move there, it is because cities offer more than just jobs. They are transformative: in the slums, as well as the office towers and leafy suburbs, the progress is from hick to metropolitan to cosmopolitan . . .

CAT/2019.2(RC)

Question. 42

Which one of the following statements would undermine the author’s stand regarding the greenness of cities?
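The density claims in the passage above can be checked with quick arithmetic. In the Python sketch below, the figures "2.8 per cent of the land", "half of humanity", and the Mumbai density of 1 million people per square mile come from the passage; the round numbers for the Earth's land area and the world's population are assumptions added for the illustration.

LAND_AREA_KM2 = 149_000_000        # assumed: approximate land surface of the Earth, km^2
WORLD_POPULATION = 7_000_000_000   # assumed: rough world population when the passage was written

urban_share_of_land = 0.028              # "2.8 per cent of the land", from the passage
urban_population = WORLD_POPULATION / 2  # "half of humanity", from the passage

urban_land_km2 = LAND_AREA_KM2 * urban_share_of_land
implied_density = urban_population / urban_land_km2   # average people per km^2

SQ_MILE_IN_KM2 = 2.59                        # 1 square mile is about 2.59 km^2
mumbai_density = 1_000_000 / SQ_MILE_IN_KM2  # the passage's Mumbai figure, per km^2

print(f"implied average urban density : {implied_density:,.0f} people per km^2")
print(f"passage's Mumbai peak density : {mumbai_density:,.0f} people per km^2")

Under these assumptions the average works out to roughly 800 to 900 people per square kilometre, several hundred times lower than the Mumbai peak the passage cites, which underlines how unevenly that half of humanity is packed.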

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The magic of squatter cities is that they are improved steadily and gradually by their residents. To a planner’s eye, these cities look chaotic. I trained as a biologist and to my eye, they look organic. Squatter cities are also unexpectedly green. They have maximum density—1 million people per square mile in some areas of Mumbai—and have minimum energy and material use. People get around by foot, bicycle, rickshaw, or the universal shared taxi.

Not everything is efficient in the slums, though. In the Brazilian favelas where electricity is stolen and therefore free, people leave their lights on all day. But in most slums recycling is literally a way of life. The Dharavi slum in Mumbai has 400 recycling units and 30,000 ragpickers. Six thousand tons of rubbish are sorted every day. In 2007, the Economist reported that in Vietnam and Mozambique, “Waves of gleaners sift the sweepings of Hanoi’s streets, just as Mozambiquan children pick over the rubbish of Maputo’s main tip. Every city in Asia and Latin America has an industry based on gathering up old cardboard boxes.” . . .

In his 1985 article, Calthorpe made a statement that still jars with most people: “The city is the most environmentally benign form of human settlement. Each city dweller consumes less land, less energy, less water, and produces less pollution than his counterpart in settlements of lower densities.” “Green Manhattan” was the inflammatory title of a 2004 New Yorker article by David Owen. “By the most significant measures,” he wrote, “New York is the greenest community in the United States, and one of the greenest cities in the world . . . The key to New York’s relative environmental benignity is its extreme compactness. . . . Placing one and a half million people on a twenty-three-square-mile island sharply reduces their opportunities to be wasteful.” He went on to note that this very compactness forces people to live in the world’s most energy-efficient apartment buildings. . . .

Urban density allows half of humanity to live on 2.8 per cent of the land. . . . Consider just the infrastructure efficiencies. According to a 2004 UN report: “The concentration of population and enterprises in urban areas greatly reduces the unit cost of piped water, sewers, drains, roads, electricity, garbage collection, transport, health care, and schools.” . . .

[T]he nationally subsidised city of Manaus in northern Brazil “answers the question” of how to stop deforestation: give people decent jobs. Then they can afford houses, and gain security. One hundred thousand people who would otherwise be deforesting the jungle around Manaus are now prospering in town making such things as mobile phones and televisions. . . .

Of course, fast-growing cities are far from an unmitigated good. They concentrate crime, pollution, disease and injustice as much as business, innovation, education and entertainment. . . . But if they are overall a net good for those who move there, it is because cities offer more than just jobs. They are transformative: in the slums, as well as the office towers and leafy suburbs, the progress is from hick to metropolitan to cosmopolitan . . .

CAT/2019.2(RC)

Question. 43

According to the passage, squatter cities are environment-friendly for all of the following reasons EXCEPT:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The magic of squatter cities is that they are improved steadily and gradually by their residents. To a planner’s eye, these cities look chaotic. I trained as a biologist and to my eye, they look organic. Squatter cities are also unexpectedly green. They have maximum density—1 million people per square mile in some areas of Mumbai—and have minimum energy and material use. People get around by foot, bicycle, rickshaw, or the universal shared taxi.

Not everything is efficient in the slums, though. In the Brazilian favelas where electricity is stolen and therefore free, people leave their lights on all day. But in most slums recycling is literally a way of life. The Dharavi slum in Mumbai has 400 recycling units and 30,000 ragpickers. Six thousand tons of rubbish are sorted every day. In 2007, the Economist reported that in Vietnam and Mozambique, “Waves of gleaners sift the sweepings of Hanoi’s streets, just as Mozambiquan children pick over the rubbish of Maputo’s main tip. Every city in Asia and Latin America has an industry based on gathering up old cardboard boxes.” . . .

In his 1985 article, Calthorpe made a statement that still jars with most people: “The city is the most environmentally benign form of human settlement. Each city dweller consumes less land, less energy, less water, and produces less pollution than his counterpart in settlements of lower densities.” “Green Manhattan” was the inflammatory title of a 2004 New Yorker article by David Owen. “By the most significant measures,” he wrote, “New York is the greenest community in the United States, and one of the greenest cities in the world . . . The key to New York’s relative environmental benignity is its extreme compactness. . . . Placing one and a half million people on a twenty-three-square-mile island sharply reduces their opportunities to be wasteful.” He went on to note that this very compactness forces people to live in the world’s most energy-efficient apartment buildings. . . .

Urban density allows half of humanity to live on 2.8 per cent of the land. . . . Consider just the infrastructure efficiencies. According to a 2004 UN report: “The concentration of population and enterprises in urban areas greatly reduces the unit cost of piped water, sewers, drains, roads, electricity, garbage collection, transport, health care, and schools.” . . .

[T]he nationally subsidised city of Manaus in northern Brazil “answers the question” of how to stop deforestation: give people decent jobs. Then they can afford houses, and gain security. One hundred thousand people who would otherwise be deforesting the jungle around Manaus are now prospering in town making such things as mobile phones and televisions. . . .

Of course, fast-growing cities are far from an unmitigated good. They concentrate crime, pollution, disease and injustice as much as business, innovation, education and entertainment. . . . But if they are overall a net good for those who move there, it is because cities offer more than just jobs. They are transformative: in the slums, as well as the office towers and leafy suburbs, the progress is from hick to metropolitan to cosmopolitan . . .

CAT/2019.2(RC)

Question. 44

We can infer that Calthorpe’s statement “still jars” with most people because most people:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The magic of squatter cities is that they are improved steadily and gradually by their residents. To a planner’s eye, these cities look chaotic. I trained as a biologist and to my eye, they look organic. Squatter cities are also unexpectedly green. They have maximum density—1 million people per square mile in some areas of Mumbai—and have minimum energy and material use. People get around by foot, bicycle, rickshaw, or the universal shared taxi.

Not everything is efficient in the slums, though. In the Brazilian favelas where electricity is stolen and therefore free, people leave their lights on all day. But in most slums recycling is literally a way of life. The Dharavi slum in Mumbai has 400 recycling units and 30,000 ragpickers. Six thousand tons of rubbish are sorted every day. In 2007, the Economist reported that in Vietnam and Mozambique, “Waves of gleaners sift the sweepings of Hanoi’s streets, just as Mozambiquan children pick over the rubbish of Maputo’s main tip. Every city in Asia and Latin America has an industry based on gathering up old cardboard boxes.” . . .

In his 1985 article, Calthorpe made a statement that still jars with most people: “The city is the most environmentally benign form of human settlement. Each city dweller consumes less land, less energy, less water, and produces less pollution than his counterpart in settlements of lower densities.” “Green Manhattan” was the inflammatory title of a 2004 New Yorker article by David Owen. “By the most significant measures,” he wrote, “New York is the greenest community in the United States, and one of the greenest cities in the world . . . The key to New York’s relative environmental benignity is its extreme compactness. . . . Placing one and a half million people on a twenty-three-square-mile island sharply reduces their opportunities to be wasteful.” He went on to note that this very compactness forces people to live in the world’s most energy-efficient apartment buildings. . . .

Urban density allows half of humanity to live on 2.8 per cent of the land. . . . Consider just the infrastructure efficiencies. According to a 2004 UN report: “The concentration of population and enterprises in urban areas greatly reduces the unit cost of piped water, sewers, drains, roads, electricity, garbage collection, transport, health care, and schools.” . . .

[T]he nationally subsidised city of Manaus in northern Brazil “answers the question” of how to stop deforestation: give people decent jobs. Then they can afford houses, and gain security. One hundred thousand people who would otherwise be deforesting the jungle around Manaus are now prospering in town making such things as mobile phones and televisions. . . .

Of course, fast-growing cities are far from an unmitigated good. They concentrate crime, pollution, disease and injustice as much as business, innovation, education and entertainment. . . . But if they are overall a net good for those who move there, it is because cities offer more than just jobs. They are transformative: in the slums, as well as the office towers and leafy suburbs, the progress is from hick to metropolitan to cosmopolitan . . .

CAT/2019.2(RC)

Question. 45

In the context of the passage, the author refers to Manaus in order to:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The magic of squatter cities is that they are improved steadily and gradually by their residents. To a planner’s eye, these cities look chaotic. I trained as a biologist and to my eye, they look organic. Squatter cities are also unexpectedly green. They have maximum density—1 million people per square mile in some areas of Mumbai—and have minimum energy and material use. People get around by foot, bicycle, rickshaw, or the universal shared taxi.

Not everything is efficient in the slums, though. In the Brazilian favelas where electricity is stolen and therefore free, people leave their lights on all day. But in most slums recycling is literally a way of life. The Dharavi slum in Mumbai has 400 recycling units and 30,000 ragpickers. Six thousand tons of rubbish are sorted every day. In 2007, the Economist reported that in Vietnam and Mozambique, “Waves of gleaners sift the sweepings of Hanoi’s streets, just as Mozambiquan children pick over the rubbish of Maputo’s main tip. Every city in Asia and Latin America has an industry based on gathering up old cardboard boxes.” . . .

In his 1985 article, Calthorpe made a statement that still jars with most people: “The city is the most environmentally benign form of human settlement. Each city dweller consumes less land, less energy, less water, and produces less pollution than his counterpart in settlements of lower densities.” “Green Manhattan” was the inflammatory title of a 2004 New Yorker article by David Owen. “By the most significant measures,” he wrote, “New York is the greenest community in the United States, and one of the greenest cities in the world . . . The key to New York’s relative environmental benignity is its extreme compactness. . . . Placing one and a half million people on a twenty-three-square-mile island sharply reduces their opportunities to be wasteful.” He went on to note that this very compactness forces people to live in the world’s most energy-efficient apartment buildings. . . .

Urban density allows half of humanity to live on 2.8 per cent of the land. . . . Consider just the infrastructure efficiencies. According to a 2004 UN report: “The concentration of population and enterprises in urban areas greatly reduces the unit cost of piped water, sewers, drains, roads, electricity, garbage collection, transport, health care, and schools.” . . .

[T]he nationally subsidised city of Manaus in northern Brazil “answers the question” of how to stop deforestation: give people decent jobs. Then they can afford houses, and gain security. One hundred thousand people who would otherwise be deforesting the jungle around Manaus are now prospering in town making such things as mobile phones and televisions. . . .

Of course, fast-growing cities are far from an unmitigated good. They concentrate crime, pollution, disease and injustice as much as business, innovation, education and entertainment. . . . But if they are overall a net good for those who move there, it is because cities offer more than just jobs. They are transformative: in the slums, as well as the office towers and leafy suburbs, the progress is from hick to metropolitan to cosmopolitan . . .

CAT/2019.2(RC)

Question. 46

From the passage it can be inferred that cities are good places to live in for all of the following reasons EXCEPT that they:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The only thing worse than being lied to is not knowing you’re being lied to. It’s true that plastic pollution is a huge problem, of planetary proportions. And it’s true we could all do more to reduce our plastic footprint. The lie is that blame for the plastic problem is wasteful consumers and that changing our individual habits will fix it.

Recycling plastic is to saving the Earth what hammering a nail is to halting a falling skyscraper. You struggle to find a place to do it and feel pleased when you succeed. But your effort is wholly inadequate and distracts from the real problem of why the building is collapsing in the first place. The real problem is that single-use plastic—the very idea of producing plastic items like grocery bags, which we use for an average of 12 minutes but can persist in the environment for half a millennium—is an incredibly reckless abuse of technology. Encouraging individuals to recycle more will never solve the problem of a massive production of single-use plastic that should have been avoided in the first place.

As an ecologist and evolutionary biologist, I have had a disturbing window into the accumulating literature on the hazards of plastic pollution. Scientists have long recognized that plastics biodegrade slowly, if at all, and pose multiple threats to wildlife through entanglement and consumption. More recent reports highlight dangers posed by absorption of toxic chemicals in the water and by plastic odors that mimic some species’ natural food. Plastics also accumulate up the food chain, and studies now show that we are likely ingesting it ourselves in seafood. . . .

Beginning in the 1950s, big beverage companies like Coca-Cola and Anheuser-Busch, along with Phillip Morris and others, formed a non-profit called Keep America Beautiful. Its mission is/was to educate and encourage environmental stewardship in the public. . . . At face value, these efforts seem benevolent, but they obscure the real problem, which is the role that corporate polluters play in the plastic problem. This clever misdirection has led journalist and author Heather Rogers to describe Keep America Beautiful as the first corporate greenwashing front, as it has helped shift the public focus to consumer recycling behavior and actively thwarted legislation that would increase extended producer responsibility for waste management. . . . [T]he greatest success of Keep America Beautiful has been to shift the onus of environmental responsibility onto the public while simultaneously becoming a trusted name in the environmental movement. . . .

So what can we do to make responsible use of plastic a reality? First: reject the lie. Litterbugs are not responsible for the global ecological disaster of plastic. Humans can only function to the best of their abilities, given time, mental bandwidth and systemic constraints. Our huge problem with plastic is the result of a permissive legal framework that has allowed the uncontrolled rise of plastic pollution, despite clear evidence of the harm it causes to local communities and the world’s oceans. Recycling is also too hard in most parts of the U.S. and lacks the proper incentives to make it work well.

CAT/2018.1(RC)

Question. 47

Which of the following interventions would the author most strongly support:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The only thing worse than being lied to is not knowing you’re being lied to. It’s true that plastic pollution is a huge problem, of planetary proportions. And it’s true we could all do more to reduce our plastic footprint. The lie is that blame for the plastic problem is wasteful consumers and that changing our individual habits will fix it.

Recycling plastic is to saving the Earth what hammering a nail is to halting a falling skyscraper. You struggle to find a place to do it and feel pleased when you succeed. But your effort is wholly inadequate and distracts from the real problem of why the building is collapsing in the first place. The real problem is that single-use plastic—the very idea of producing plastic items like grocery bags, which we use for an average of 12 minutes but can persist in the environment for half a millennium—is an incredibly reckless abuse of technology. Encouraging individuals to recycle more will never solve the problem of a massive production of single-use plastic that should have been avoided in the first place.

As an ecologist and evolutionary biologist, I have had a disturbing window into the accumulating literature on the hazards of plastic pollution. Scientists have long recognized that plastics biodegrade slowly, if at all, and pose multiple threats to wildlife through entanglement and consumption. More recent reports highlight dangers posed by absorption of toxic chemicals in the water and by plastic odors that mimic some species’ natural food. Plastics also accumulate up the food chain, and studies now show that we are likely ingesting it ourselves in seafood. . . .

Beginning in the 1950s, big beverage companies like Coca-Cola and Anheuser-Busch, along with Phillip Morris and others, formed a non-profit called Keep America Beautiful. Its mission is/was to educate and encourage environmental stewardship in the public. . . . At face value, these efforts seem benevolent, but they obscure the real problem, which is the role that corporate polluters play in the plastic problem. This clever misdirection has led journalist and author Heather Rogers to describe Keep America Beautiful as the first corporate greenwashing front, as it has helped shift the public focus to consumer recycling behavior and actively thwarted legislation that would increase extended producer responsibility for waste management. . . . [T]he greatest success of Keep America Beautiful has been to shift the onus of environmental responsibility onto the public while simultaneously becoming a trusted name in the environmental movement. . . .

So what can we do to make responsible use of plastic a reality? First: reject the lie. Litterbugs are not responsible for the global ecological disaster of plastic. Humans can only function to the best of their abilities, given time, mental bandwidth and systemic constraints. Our huge problem with plastic is the result of a permissive legal framework that has allowed the uncontrolled rise of plastic pollution, despite clear evidence of the harm it causes to local communities and the world’s oceans. Recycling is also too hard in most parts of the U.S. and lacks the proper incentives to make it work well.

CAT/2018.1(RC)

Question. 48

The author lists all of the following as negative effects of the use of plastics EXCEPT the:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The only thing worse than being lied to is not knowing you’re being lied to. It’s true that plastic pollution is a huge problem, of planetary proportions. And it’s true we could all do more to reduce our plastic footprint. The lie is that blame for the plastic problem is wasteful consumers and that changing our individual habits will fix it.

Recycling plastic is to saving the Earth what hammering a nail is to halting a falling skyscraper. You struggle to find a place to do it and feel pleased when you succeed. But your effort is wholly inadequate and distracts from the real problem of why the building is collapsing in the first place. The real problem is that single-use plastic—the very idea of producing plastic items like grocery bags, which we use for an average of 12 minutes but can persist in the environment for half a millennium—is an incredibly reckless abuse of technology. Encouraging individuals to recycle more will never solve the problem of a massive production of single-use plastic that should have been avoided in the first place.

As an ecologist and evolutionary biologist, I have had a disturbing window into the accumulating literature on the hazards of plastic pollution. Scientists have long recognized that plastics biodegrade slowly, if at all, and pose multiple threats to wildlife through entanglement and consumption. More recent reports highlight dangers posed by absorption of toxic chemicals in the water and by plastic odors that mimic some species’ natural food. Plastics also accumulate up the food chain, and studies now show that we are likely ingesting it ourselves in seafood. . . .

Beginning in the 1950s, big beverage companies like Coca-Cola and Anheuser-Busch, along with Phillip Morris and others, formed a non-profit called Keep America Beautiful. Its mission is/was to educate and encourage environmental stewardship in the public. . . . At face value, these efforts seem benevolent, but they obscure the real problem, which is the role that corporate polluters play in the plastic problem. This clever misdirection has led journalist and author Heather Rogers to describe Keep America Beautiful as the first corporate greenwashing front, as it has helped shift the public focus to consumer recycling behavior and actively thwarted legislation that would increase extended producer responsibility for waste management. . . . [T]he greatest success of Keep America Beautiful has been to shift the onus of environmental responsibility onto the public while simultaneously becoming a trusted name in the environmental movement. . . .

So what can we do to make responsible use of plastic a reality? First: reject the lie. Litterbugs are not responsible for the global ecological disaster of plastic. Humans can only function to the best of their abilities, given time, mental bandwidth and systemic constraints. Our huge problem with plastic is the result of a permissive legal framework that has allowed the uncontrolled rise of plastic pollution, despite clear evidence of the harm it causes to local communities and the world’s oceans. Recycling is also too hard in most parts of the U.S. and lacks the proper incentives to make it work well.

CAT/2018.1(RC)

Question. 49

In the first paragraph, the author uses “lie” to refer to the:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The only thing worse than being lied to is not knowing you’re being lied to. It’s true that plastic pollution is a huge problem, of planetary proportions. And it’s true we could all do more to reduce our plastic footprint. The lie is that blame for the plastic problem is wasteful consumers and that changing our individual habits will fix it.

Recycling plastic is to saving the Earth what hammering a nail is to halting a falling skyscraper. You struggle to find a place to do it and feel pleased when you succeed. But your effort is wholly inadequate and distracts from the real problem of why the building is collapsing in the first place. The real problem is that single-use plastic—the very idea of producing plastic items like grocery bags, which we use for an average of 12 minutes but can persist in the environment for half a millennium—is an incredibly reckless abuse of technology. Encouraging individuals to recycle more will never solve the problem of a massive production of single-use plastic that should have been avoided in the first place.

As an ecologist and evolutionary biologist, I have had a disturbing window into the accumulating literature on the hazards of plastic pollution. Scientists have long recognized that plastics biodegrade slowly, if at all, and pose multiple threats to wildlife through entanglement and consumption. More recent reports highlight dangers posed by absorption of toxic chemicals in the water and by plastic odors that mimic some species’ natural food. Plastics also accumulate up the food chain, and studies now show that we are likely ingesting it ourselves in seafood. . . .

Beginning in the 1950s, big beverage companies like Coca-Cola and Anheuser-Busch, along with Phillip Morris and others, formed a non-profit called Keep America Beautiful. Its mission is/was to educate and encourage environmental stewardship in the public. . . . At face value, these efforts seem benevolent, but they obscure the real problem, which is the role that corporate polluters play in the plastic problem. This clever misdirection has led journalist and author Heather Rogers to describe Keep America Beautiful as the first corporate greenwashing front, as it has helped shift the public focus to consumer recycling behavior and actively thwarted legislation that would increase extended producer responsibility for waste management. . . . [T]he greatest success of Keep America Beautiful has been to shift the onus of environmental responsibility onto the public while simultaneously becoming a trusted name in the environmental movement. . . .

So what can we do to make responsible use of plastic a reality? First: reject the lie. Litterbugs are not responsible for the global ecological disaster of plastic. Humans can only function to the best of their abilities, given time, mental bandwidth and systemic constraints. Our huge problem with plastic is the result of a permissive legal framework that has allowed the uncontrolled rise of plastic pollution, despite clear evidence of the harm it causes to local communities and the world’s oceans. Recycling is also too hard in most parts of the U.S. and lacks the proper incentives to make it work well.

CAT/2018.1(RC)

Question. 50

In the second paragraph, the phrase “what hammering a nail is to halting a falling skyscraper” means:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

The only thing worse than being lied to is not knowing you’re being lied to. It’s true that plastic pollution is a huge problem, of planetary proportions. And it’s true we could all do more to reduce our plastic footprint. The lie is that blame for the plastic problem is wasteful consumers and that changing our individual habits will fix it.

Recycling plastic is to saving the Earth what hammering a nail is to halting a falling skyscraper. You struggle to find a place to do it and feel pleased when you succeed. But your effort is wholly inadequate and distracts from the real problem of why the building is collapsing in the first place. The real problem is that single-use plastic—the very idea of producing plastic items like grocery bags, which we use for an average of 12 minutes but can persist in the environment for half a millennium—is an incredibly reckless abuse of technology. Encouraging individuals to recycle more will never solve the problem of a massive production of single-use plastic that should have been avoided in the first place.

As an ecologist and evolutionary biologist, I have had a disturbing window into the accumulating literature on the hazards of plastic pollution. Scientists have long recognized that plastics biodegrade slowly, if at all, and pose multiple threats to wildlife through entanglement and consumption. More recent reports highlight dangers posed by absorption of toxic chemicals in the water and by plastic odors that mimic some species’ natural food. Plastics also accumulate up the food chain, and studies now show that we are likely ingesting it ourselves in seafood. . . .

Beginning in the 1950s, big beverage companies like Coca-Cola and Anheuser-Busch, along with Phillip Morris and others, formed a non-profit called Keep America Beautiful. Its mission is/was to educate and encourage environmental stewardship in the public. . . . At face value, these efforts seem benevolent, but they obscure the real problem, which is the role that corporate polluters play in the plastic problem. This clever misdirection has led journalist and author Heather Rogers to describe Keep America Beautiful as the first corporate greenwashing front, as it has helped shift the public focus to consumer recycling behavior and actively thwarted legislation that would increase extended producer responsibility for waste management. . . . [T]he greatest success of Keep America Beautiful has been to shift the onus of environmental responsibility onto the public while simultaneously becoming a trusted name in the environmental movement. . . .

So what can we do to make responsible use of plastic a reality? First: reject the lie. Litterbugs are not responsible for the global ecological disaster of plastic. Humans can only function to the best of their abilities, given time, mental bandwidth and systemic constraints. Our huge problem with plastic is the result of a permissive legal framework that has allowed the uncontrolled rise of plastic pollution, despite clear evidence of the harm it causes to local communities and the world’s oceans. Recycling is also too hard in most parts of the U.S. and lacks the proper incentives to make it work well.

CAT/2018.1(RC)

Question. 51

It can be inferred that the author considers the Keep America Beautiful organisation:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

When researchers at Emory University in Atlanta trained mice to fear the smell of almonds (by pairing it with electric shocks), they found, to their consternation, that both the children and grandchildren of these mice were spontaneously afraid of the same smell. That is not supposed to happen. Generations of schoolchildren have been taught that the inheritance of acquired characteristics is impossible. A mouse should not be born with something its parents have learned during their lifetimes, any more than a mouse that loses its tail in an accident should give birth to tailless mice.

Modern evolutionary biology dates back to a synthesis that emerged around the 1940s–60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet [new evidence] from genomics, epigenetics and developmental biology [indicates] that evolution is more complex than we once assumed.

In his book On Human Nature (1978), the evolutionary biologist Edward O Wilson claimed that human culture is held on a genetic leash. The metaphor [needs revision]. Imagine a dog-walker (the genes) struggling to retain control of a brawny mastiff (human culture). The pair’s trajectory (the pathway of evolution) reflects the outcome of the struggle. Now imagine the same dog-walker struggling with multiple dogs, on leashes of varied lengths, with each dog tugging in different directions. All these tugs represent the influence of developmental factors, including epigenetics, antibodies and hormones passed on by parents, as well as the ecological legacies and culture they bequeath.

The received wisdom is that parental experiences can’t affect the characters of their offspring. Except they do. The way that genes are expressed to produce an organism’s phenotype – the actual characteristics it ends up with – is affected by chemicals that attach to them. Everything from diet to air pollution to parental behaviour can influence the addition or removal of these chemical marks, which switches genes on or off. Usually these so-called ‘epigenetic’ attachments are removed during the production of sperm and egg cells, but it turns out that some escape the resetting process and are passed on to the next generation, along with the genes. This is known as ‘epigenetic inheritance’, and more and more studies are confirming that it really happens. Let’s return to the almond-fearing mice. The inheritance of an epigenetic mark transmitted in the sperm is what led the mice’s offspring to acquire an inherited fear.

Epigenetics is only part of the story. Through culture and society, [humans and other animals] inherit knowledge and skills acquired by [their] parents. All this complexity points to an evolutionary process in which genomes (over hundreds to thousands of generations), epigenetic modifications and inherited cultural factors (over several, perhaps tens or hundreds of generations), and parental effects (over single-generation timespans) collectively inform how organisms adapt. These extra-genetic kinds of inheritance give organisms the flexibility to make rapid adjustments to environmental challenges, dragging genetic change in their wake – much like a rowdy pack of dogs.

CAT/2018.1(RC)

Question. 52

The passage uses the metaphor of a dog walker to argue that evolutionary adaptation is most comprehensively understood as being determined by:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

When researchers at Emory University in Atlanta trained mice to fear the smell of almonds (by pairing it with electric shocks), they found, to their consternation, that both the children and grandchildren of these mice were spontaneously afraid of the same smell. That is not supposed to happen. Generations of schoolchildren have been taught that the inheritance of acquired characteristics is impossible. A mouse should not be born with something its parents have learned during their lifetimes, any more than a mouse that loses its tail in an accident should give birth to tailless mice.

Modern evolutionary biology dates back to a synthesis that emerged around the 1940s–60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet [new evidence] from genomics, epigenetics and developmental biology [indicates] that evolution is more complex than we once assumed.

In his book On Human Nature (1978), the evolutionary biologist Edward O Wilson claimed that human culture is held on a genetic leash. The metaphor [needs revision]. Imagine a dog-walker (the genes) struggling to retain control of a brawny mastiff (human culture). The pair’s trajectory (the pathway of evolution) reflects the outcome of the struggle. Now imagine the same dog-walker struggling with multiple dogs, on leashes of varied lengths, with each dog tugging in different directions. All these tugs represent the influence of developmental factors, including epigenetics, antibodies and hormones passed on by parents, as well as the ecological legacies and culture they bequeath.

The received wisdom is that parental experiences can’t affect the characters of their offspring. Except they do. The way that genes are expressed to produce an organism’s phenotype – the actual characteristics it ends up with – is affected by chemicals that attach to them. Everything from diet to air pollution to parental behaviour can influence the addition or removal of these chemical marks, which switches genes on or off. Usually these so-called ‘epigenetic’ attachments are removed during the production of sperm and egg cells, but it turns out that some escape the resetting process and are passed on to the next generation, along with the genes. This is known as ‘epigenetic inheritance’, and more and more studies are confirming that it really happens. Let’s return to the almond-fearing mice. The inheritance of an epigenetic mark transmitted in the sperm is what led the mice’s offspring to acquire an inherited fear.

Epigenetics is only part of the story. Through culture and society, [humans and other animals] inherit knowledge and skills acquired by [their] parents. All this complexity points to an evolutionary process in which genomes (over hundreds to thousands of generations), epigenetic modifications and inherited cultural factors (over several, perhaps tens or hundreds of generations), and parental effects (over single-generation timespans) collectively inform how organisms adapt. These extra-genetic kinds of inheritance give organisms the flexibility to make rapid adjustments to environmental challenges, dragging genetic change in their wake – much like a rowdy pack of dogs.

CAT/2018.1(RC)

Question. 53

Which of the following options best describes the author's argument?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

When researchers at Emory University in Atlanta trained mice to fear the smell of almonds (by pairing it with electric shocks), they found, to their consternation, that both the children and grandchildren of these mice were spontaneously afraid of the same smell. That is not supposed to happen. Generations of schoolchildren have been taught that the inheritance of acquired characteristics is impossible. A mouse should not be born with something its parents have learned during their lifetimes, any more than a mouse that loses its tail in an accident should give birth to tailless mice.

Modern evolutionary biology dates back to a synthesis that emerged around the 1940s–60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet [new evidence] from genomics, epigenetics and developmental biology [indicates] that evolution is more complex than we once assumed.

In his book On Human Nature (1978), the evolutionary biologist Edward O Wilson claimed that human culture is held on a genetic leash. The metaphor [needs revision]. Imagine a dog-walker (the genes) struggling to retain control of a brawny mastiff (human culture). The pair’s trajectory (the pathway of evolution) reflects the outcome of the struggle. Now imagine the same dog-walker struggling with multiple dogs, on leashes of varied lengths, with each dog tugging in different directions. All these tugs represent the influence of developmental factors, including epigenetics, antibodies and hormones passed on by parents, as well as the ecological legacies and culture they bequeath.

The received wisdom is that parental experiences can’t affect the characters of their offspring. Except they do. The way that genes are expressed to produce an organism’s phenotype – the actual characteristics it ends up with – is affected by chemicals that attach to them. Everything from diet to air pollution to parental behaviour can influence the addition or removal of these chemical marks, which switches genes on or off. Usually these so-called ‘epigenetic’ attachments are removed during the production of sperm and egg cells, but it turns out that some escape the resetting process and are passed on to the next generation, along with the genes. This is known as ‘epigenetic inheritance’, and more and more studies are confirming that it really happens. Let’s return to the almond-fearing mice. The inheritance of an epigenetic mark transmitted in the sperm is what led the mice’s offspring to acquire an inherited fear.

Epigenetics is only part of the story. Through culture and society, [humans and other animals] inherit knowledge and skills acquired by [their] parents. All this complexity points to an evolutionary process in which genomes (over hundreds to thousands of generations), epigenetic modifications and inherited cultural factors (over several, perhaps tens or hundreds of generations), and parental effects (over single-generation timespans) collectively inform how organisms adapt. These extra-genetic kinds of inheritance give organisms the flexibility to make rapid adjustments to environmental challenges, dragging genetic change in their wake – much like a rowdy pack of dogs.

CAT/2018.1(RC)

Question. 54

The Emory University experiment with mice points to the inheritance of:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

When researchers at Emory University in Atlanta trained mice to fear the smell of almonds (by pairing it with electric shocks), they found, to their consternation, that both the children and grandchildren of these mice were spontaneously afraid of the same smell. That is not supposed to happen. Generations of schoolchildren have been taught that the inheritance of acquired characteristics is impossible. A mouse should not be born with something its parents have learned during their lifetimes, any more than a mouse that loses its tail in an accident should give birth to tailless mice.

Modern evolutionary biology dates back to a synthesis that emerged around the 1940s–60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet [new evidence] from genomics, epigenetics and developmental biology [indicates] that evolution is more complex than we once assumed.

In his book On Human Nature (1978), the evolutionary biologist Edward O Wilson claimed that human culture is held on a genetic leash. The metaphor [needs revision]. Imagine a dog-walker (the genes) struggling to retain control of a brawny mastiff (human culture). The pair’s trajectory (the pathway of evolution) reflects the outcome of the struggle. Now imagine the same dog-walker struggling with multiple dogs, on leashes of varied lengths, with each dog tugging in different directions. All these tugs represent the influence of developmental factors, including epigenetics, antibodies and hormones passed on by parents, as well as the ecological legacies and culture they bequeath.

The received wisdom is that parental experiences can’t affect the characters of their offspring. Except they do. The way that genes are expressed to produce an organism’s phenotype – the actual characteristics it ends up with – is affected by chemicals that attach to them. Everything from diet to air pollution to parental behaviour can influence the addition or removal of these chemical marks, which switches genes on or off. Usually these so-called ‘epigenetic’ attachments are removed during the production of sperm and egg cells, but it turns out that some escape the resetting process and are passed on to the next generation, along with the genes. This is known as ‘epigenetic inheritance’, and more and more studies are confirming that it really happens. Let’s return to the almond-fearing mice. The inheritance of an epigenetic mark transmitted in the sperm is what led the mice’s offspring to acquire an inherited fear.

Epigenetics is only part of the story. Through culture and society, [humans and other animals] inherit knowledge and skills acquired by [their] parents. All this complexity points to an evolutionary process in which genomes (over hundreds to thousands of generations), epigenetic modifications and inherited cultural factors (over several, perhaps tens or hundreds of generations), and parental effects (over single-generation timespans) collectively inform how organisms adapt. These extra-genetic kinds of inheritance give organisms the flexibility to make rapid adjustments to environmental challenges, dragging genetic change in their wake – much like a rowdy pack of dogs.

CAT/2018.1(RC)

Question. 55

Which of the following, if found to be true, would negate the main message of the passage?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Grove snails as a whole are distributed all over Europe, but a specific variety of the snail, with a distinctive white-lipped shell, is found exclusively in Ireland and in the Pyrenees mountains that lie on the border between France and Spain. The researchers sampled a total of 423 snail specimens from 36 sites distributed across Europe, with an emphasis on gathering large numbers of the white-lipped variety. When they sequenced genes from the mitochondrial DNA of each of these snails and used algorithms to analyze the genetic diversity between them, they found that a distinct lineage (the snails with the white-lipped shells) was indeed endemic to the two very specific and distant places in question.

Explaining this is tricky. Previously, some had speculated that the strange distributions of creatures such as the white-lipped grove snails could be explained by convergent evolution—in which two populations evolve the same trait by coincidence—but the underlying genetic similarities between the two groups rules that out. Alternately, some scientists had suggested that the white-lipped variety had simply spread over the whole continent, then been wiped out everywhere besides Ireland and the Pyrenees, but the researchers say their sampling and subsequent DNA analysis eliminate that possibility too.

“If the snails naturally colonized Ireland, you would expect to find some of the same genetic type in other areas of Europe, especially Britain. We just don’t find them,” Davidson, the lead author, said in a press statement.

Moreover, if they’d gradually spread across the continent, there would be some genetic variation within the white-lipped type, because evolution would introduce variety over the thousands of years it would have taken them to spread from the Pyrenees to Ireland. That variation doesn’t exist, at least in the genes sampled. This means that rather than the organism gradually expanding its range, large populations instead were somehow moved en masse to the other location within the space of a few dozen generations, ensuring a lack of genetic variety.

“There is a very clear pattern, which is difficult to explain except by involving humans,” Davidson said. Humans, after all, colonized Ireland roughly 9,000 years ago, and the oldest fossil evidence of grove snails in Ireland dates to roughly the same era. Additionally, there is archaeological evidence of early sea trade between the ancient peoples of Spain and Ireland via the Atlantic and even evidence that humans routinely ate these types of snails before the advent of agriculture, as their burnt shells have been found in Stone Age trash heaps.

The simplest explanation, then? Boats. These snails may have inadvertently traveled on the floor of the small, coast-hugging skiffs these early humans used for travel, or they may have been intentionally carried to Ireland by the seafarers as a food source. “The highways of the past were rivers and the ocean–as the river that flanks the Pyrenees was an ancient trade route to the Atlantic, what we’re actually seeing might be the long lasting legacy of snails that hitched a ride as humans travelled from the South of France to Ireland 8,000 years ago,” Davidson said.

CAT/2018.2(RC)

Question. 56

All of the following evidence supports the passage’s explanation of sea travel/trade EXCEPT:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Grove snails as a whole are distributed all over Europe, but a specific variety of the snail, with a distinctive white-lipped shell, is found exclusively in Ireland and in the Pyrenees mountains that lie on the border between France and Spain. The researchers sampled a total of 423 snail specimens from 36 sites distributed across Europe, with an emphasis on gathering large numbers of the white-lipped variety. When they sequenced genes from the mitochondrial DNA of each of these snails and used algorithms to analyze the genetic diversity between them, they found that a distinct lineage (the snails with the white-lipped shells) was indeed endemic to the two very specific and distant places in question.

Explaining this is tricky. Previously, some had speculated that the strange distributions of creatures such as the white-lipped grove snails could be explained by convergent evolution—in which two populations evolve the same trait by coincidence—but the underlying genetic similarities between the two groups rules that out. Alternately, some scientists had suggested that the white-lipped variety had simply spread over the whole continent, then been wiped out everywhere besides Ireland and the Pyrenees, but the researchers say their sampling and subsequent DNA analysis eliminate that possibility too.

“If the snails naturally colonized Ireland, you would expect to find some of the same genetic type in other areas of Europe, especially Britain. We just don’t find them,” Davidson, the lead author, said in a press statement.

Moreover, if they’d gradually spread across the continent, there would be some genetic variation within the white-lipped type, because evolution would introduce variety over the thousands of years it would have taken them to spread from the Pyrenees to Ireland. That variation doesn’t exist, at least in the genes sampled. This means that rather than the organism gradually expanding its range, large populations instead were somehow moved en masse to the other location within the space of a few dozen generations, ensuring a lack of genetic variety.

“There is a very clear pattern, which is difficult to explain except by involving humans,” Davidson said. Humans, after all, colonized Ireland roughly 9,000 years ago, and the oldest fossil evidence of grove snails in Ireland dates to roughly the same era. Additionally, there is archaeological evidence of early sea trade between the ancient peoples of Spain and Ireland via the Atlantic and even evidence that humans routinely ate these types of snails before the advent of agriculture, as their burnt shells have been found in Stone Age trash heaps.

The simplest explanation, then? Boats. These snails may have inadvertently traveled on the floor of the small, coast-hugging skiffs these early humans used for travel, or they may have been intentionally carried to Ireland by the seafarers as a food source. “The highways of the past were rivers and the ocean–as the river that flanks the Pyrenees was an ancient trade route to the Atlantic, what we’re actually seeing might be the long lasting legacy of snails that hitched a ride as humans travelled from the South of France to Ireland 8,000 years ago,” Davidson said.

CAT/2018.2(RC)

Question. 57

The passage outlines several hypotheses and evidence related to white-lipped grove snails to arrive at the most convincing explanation for:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Grove snails as a whole are distributed all over Europe, but a specific variety of the snail, with a distinctive white-lipped shell, is found exclusively in Ireland and in the Pyrenees mountains that lie on the border between France and Spain. The researchers sampled a total of 423 snail specimens from 36 sites distributed across Europe, with an emphasis on gathering large numbers of the white-lipped variety. When they sequenced genes from the mitochondrial DNA of each of these snails and used algorithms to analyze the genetic diversity between them, they found that a distinct lineage (the snails with the white-lipped shells) was indeed endemic to the two very specific and distant places in question.

Explaining this is tricky. Previously, some had speculated that the strange distributions of creatures such as the white-lipped grove snails could be explained by convergent evolution—in which two populations evolve the same trait by coincidence—but the underlying genetic similarities between the two groups rules that out. Alternately, some scientists had suggested that the white-lipped variety had simply spread over the whole continent, then been wiped out everywhere besides Ireland and the Pyrenees, but the researchers say their sampling and subsequent DNA analysis eliminate that possibility too.

“If the snails naturally colonized Ireland, you would expect to find some of the same genetic type in other areas of Europe, especially Britain. We just don’t find them,” Davidson, the lead author, said in a press statement.

Moreover, if they’d gradually spread across the continent, there would be some genetic variation within the white-lipped type, because evolution would introduce variety over the thousands of years it would have taken them to spread from the Pyrenees to Ireland. That variation doesn’t exist, at least in the genes sampled. This means that rather than the organism gradually expanding its range, large populations instead were somehow moved en masse to the other location within the space of a few dozen generations, ensuring a lack of genetic variety.

“There is a very clear pattern, which is difficult to explain except by involving humans,” Davidson said. Humans, after all, colonized Ireland roughly 9,000 years ago, and the oldest fossil evidence of grove snails in Ireland dates to roughly the same era. Additionally, there is archaeological evidence of early sea trade between the ancient peoples of Spain and Ireland via the Atlantic and even evidence that humans routinely ate these types of snails before the advent of agriculture, as their burnt shells have been found in Stone Age trash heaps.

The simplest explanation, then? Boats. These snails may have inadvertently traveled on the floor of the small, coast-hugging skiffs these early humans used for travel, or they may have been intentionally carried to Ireland by the seafarers as a food source. “The highways of the past were rivers and the ocean–as the river that flanks the Pyrenees was an ancient trade route to the Atlantic, what we’re actually seeing might be the long lasting legacy of snails that hitched a ride as humans travelled from the South of France to Ireland 8,000 years ago,” Davidson said.

CAT/2018.2(RC)

Question. 58

Which one of the following makes the author eliminate convergent evolution as a probable explanation for why white-lipped grove snails are found in Ireland and the Pyrenees?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Grove snails as a whole are distributed all over Europe, but a specific variety of the snail, with a distinctive white-lipped shell, is found exclusively in Ireland and in the Pyrenees mountains that lie on the border between France and Spain. The researchers sampled a total of 423 snail specimens from 36 sites distributed across Europe, with an emphasis on gathering large numbers of the white-lipped variety. When they sequenced genes from the mitochondrial DNA of each of these snails and used algorithms to analyze the genetic diversity between them, they found that a distinct lineage (the snails with the white-lipped shells) was indeed endemic to the two very specific and distant places in question.

Explaining this is tricky. Previously, some had speculated that the strange distributions of creatures such as the white-lipped grove snails could be explained by convergent evolution—in which two populations evolve the same trait by coincidence—but the underlying genetic similarities between the two groups rules that out. Alternately, some scientists had suggested that the white-lipped variety had simply spread over the whole continent, then been wiped out everywhere besides Ireland and the Pyrenees, but the researchers say their sampling and subsequent DNA analysis eliminate that possibility too.

“If the snails naturally colonized Ireland, you would expect to find some of the same genetic type in other areas of Europe, especially Britain. We just don’t find them,” Davidson, the lead author, said in a press statement.

Moreover, if they’d gradually spread across the continent, there would be some genetic variation within the white-lipped type, because evolution would introduce variety over the thousands of years it would have taken them to spread from the Pyrenees to Ireland. That variation doesn’t exist, at least in the genes sampled. This means that rather than the organism gradually expanding its range, large populations instead were somehow moved en masse to the other location within the space of a few dozen generations, ensuring a lack of genetic variety.

“There is a very clear pattern, which is difficult to explain except by involving humans,” Davidson said. Humans, after all, colonized Ireland roughly 9,000 years ago, and the oldest fossil evidence of grove snails in Ireland dates to roughly the same era. Additionally, there is archaeological evidence of early sea trade between the ancient peoples of Spain and Ireland via the Atlantic and even evidence that humans routinely ate these types of snails before the advent of agriculture, as their burnt shells have been found in Stone Age trash heaps.

The simplest explanation, then? Boats. These snails may have inadvertently traveled on the floor of the small, coast-hugging skiffs these early humans used for travel, or they may have been intentionally carried to Ireland by the seafarers as a food source. “The highways of the past were rivers and the ocean–as the river that flanks the Pyrenees was an ancient trade route to the Atlantic, what we’re actually seeing might be the long lasting legacy of snails that hitched a ride as humans travelled from the South of France to Ireland 8,000 years ago,” Davidson said.

CAT/2018.2(RC)

Question. 59

In paragraph 4, the evidence that “humans routinely ate these types of snails before the advent of agriculture” can be used to conclude that:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

NOT everything looks lovelier the longer and closer its inspection. But Saturn does. It is gorgeous through Earthly telescopes. However, the 13 years of close observation provided by Cassini, an American spacecraft, showed the planet, its moons and its remarkable rings off better and better, revealing finer structures, striking novelties and greater drama.

By and large the big things in the solar system—planets and moons—are thought of as having been around since the beginning. The suggestion that rings and moons are new is, though, made even more interesting by the fact that one of those moons, Enceladus, is widely considered the most promising site in the solar system on which to look for alien life. If Enceladus is both young and bears life, that life must have come into being quickly. This is also believed to have been the case on Earth. Were it true on Enceladus, that would encourage the idea that life evolves easily when conditions are right.

One reason for thinking Saturn’s rings are young is that they are bright. The solar system is suffused with comet dust, and comet dust is dark. Leaving Saturn’s ring system (which Cassini has shown to be more than 90% water ice) out in such a mist is like leaving laundry hanging on a line downwind from a smokestack: it will get dirty. The lighter the rings are, the faster this will happen, for the less mass they contain, the less celestial pollution they can absorb before they start to discolour. Jeff Cuzzi, a scientist at America’s space agency, NASA, who helped run Cassini, told the Lunar and Planetary Science Conference in Houston that combining the mass estimates with Cassini’s measurements of the density of comet-dust near Saturn suggests the rings are no older than the first dinosaurs, nor younger than the last of them—that is, they are somewhere between 200m and 70m years old.

That timing fits well with a theory put forward in 2016, by Matija Cuk of the SETI Institute, in California and his colleagues. They suggest that at around the same time as the rings came into being an old set of moons orbiting Saturn destroyed themselves, and from their remains emerged not only the rings but also the planet’s current suite of inner moons—Rhea, Dione, Tethys, Enceladus and Mimas.

Dr Cuk and his colleagues used computer simulations of Saturn’s moons’ orbits as a sort of time machine. Looking at the rate at which tidal friction is causing these orbits to lengthen they extrapolated backwards to find out what those orbits would have looked like in the past. They discovered that about 100m years ago the orbits of two of them, Tethys and Dione, would have interacted in a way that left the planes in which they orbit markedly tilted. But their orbits are untilted. The obvious, if unsettling, conclusion was that this interaction never happened—and thus that at the time when it should have happened, Dione and Tethys were simply not there. They must have come into being later.

CAT/2018.2(RC)

Question. 60

The phrase “leaving laundry hanging on a line downwind from a smokestack” is used to explain how the ringed planet’s:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

NOT everything looks lovelier the longer and closer its inspection. But Saturn does. It is gorgeous through Earthly telescopes. However, the 13 years of close observation provided by Cassini, an American spacecraft, showed the planet, its moons and its remarkable rings off better and better, revealing finer structures, striking novelties and greater drama.

By and large the big things in the solar system—planets and moons—are thought of as having been around since the beginning. The suggestion that rings and moons are new is, though, made even more interesting by the fact that one of those moons, Enceladus, is widely considered the most promising site in the solar system on which to look for alien life. If Enceladus is both young and bears life, that life must have come into being quickly. This is also believed to have been the case on Earth. Were it true on Enceladus, that would encourage the idea that life evolves easily when conditions are right.

One reason for thinking Saturn’s rings are young is that they are bright. The solar system is suffused with comet dust, and comet dust is dark. Leaving Saturn’s ring system (which Cassini has shown to be more than 90% water ice) out in such a mist is like leaving laundry hanging on a line downwind from a smokestack: it will get dirty. The lighter the rings are, the faster this will happen, for the less mass they contain, the less celestial pollution they can absorb before they start to discolour. Jeff Cuzzi, a scientist at America’s space agency, NASA, who helped run Cassini, told the Lunar and Planetary Science Conference in Houston that combining the mass estimates with Cassini’s measurements of the density of comet-dust near Saturn suggests the rings are no older than the first dinosaurs, nor younger than the last of them—that is, they are somewhere between 200m and 70m years old.

That timing fits well with a theory put forward in 2016, by Matija Cuk of the SETI Institute, in California and his colleagues. They suggest that at around the same time as the rings came into being an old set of moons orbiting Saturn destroyed themselves, and from their remains emerged not only the rings but also the planet’s current suite of inner moons—Rhea, Dione, Tethys, Enceladus and Mimas.

Dr Cuk and his colleagues used computer simulations of Saturn’s moons’ orbits as a sort of time machine. Looking at the rate at which tidal friction is causing these orbits to lengthen they extrapolated backwards to find out what those orbits would have looked like in the past. They discovered that about 100m years ago the orbits of two of them, Tethys and Dione, would have interacted in a way that left the planes in which they orbit markedly tilted. But their orbits are untilted. The obvious, if unsettling, conclusion was that this interaction never happened—and thus that at the time when it should have happened, Dione and Tethys were simply not there. They must have come into being later.

CAT/2018.2(RC)

Question. 61

Data provided by Cassini challenged the assumption that:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

NOT everything looks lovelier the longer and closer its inspection. But Saturn does. It is gorgeous through Earthly telescopes. However, the 13 years of close observation provided by Cassini, an American spacecraft, showed the planet, its moons and its remarkable rings off better and better, revealing finer structures, striking novelties and greater drama.

By and large the big things in the solar system—planets and moons—are thought of as having been around since the beginning. The suggestion that rings and moons are new is, though, made even more interesting by the fact that one of those moons, Enceladus, is widely considered the most promising site in the solar system on which to look for alien life. If Enceladus is both young and bears life, that life must have come into being quickly. This is also believed to have been the case on Earth. Were it true on Enceladus, that would encourage the idea that life evolves easily when conditions are right.

One reason for thinking Saturn’s rings are young is that they are bright. The solar system is suffused with comet dust, and comet dust is dark. Leaving Saturn’s ring system (which Cassini has shown to be more than 90% water ice) out in such a mist is like leaving laundry hanging on a line downwind from a smokestack: it will get dirty. The lighter the rings are, the faster this will happen, for the less mass they contain, the less celestial pollution they can absorb before they start to discolour. Jeff Cuzzi, a scientist at America’s space agency, NASA, who helped run Cassini, told the Lunar and Planetary Science Conference in Houston that combining the mass estimates with Cassini’s measurements of the density of comet-dust near Saturn suggests the rings are no older than the first dinosaurs, nor younger than the last of them—that is, they are somewhere between 200m and 70m years old.

That timing fits well with a theory put forward in 2016, by Matija Cuk of the SETI Institute, in California and his colleagues. They suggest that at around the same time as the rings came into being an old set of moons orbiting Saturn destroyed themselves, and from their remains emerged not only the rings but also the planet’s current suite of inner moons—Rhea, Dione, Tethys, Enceladus and Mimas.

Dr Cuk and his colleagues used computer simulations of Saturn’s moons’ orbits as a sort of time machine. Looking at the rate at which tidal friction is causing these orbits to lengthen they extrapolated backwards to find out what those orbits would have looked like in the past. They discovered that about 100m years ago the orbits of two of them, Tethys and Dione, would have interacted in a way that left the planes in which they orbit markedly tilted. But their orbits are untilted. The obvious, if unsettling, conclusion was that this interaction never happened—and thus that at the time when it should have happened, Dione and Tethys were simply not there. They must have come into being later.

CAT/2018.2(RC)

Question. 62

Based on information provided in the passage, we can infer that, in addition to water ice, Saturn’s rings might also have small amounts of:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

NOT everything looks lovelier the longer and closer its inspection. But Saturn does. It is gorgeous through Earthly telescopes. However, the 13 years of close observation provided by Cassini, an American spacecraft, showed the planet, its moons and its remarkable rings off better and better, revealing finer structures, striking novelties and greater drama.

By and large the big things in the solar system—planets and moons—are thought of as having been around since the beginning. The suggestion that rings and moons are new is, though, made even more interesting by the fact that one of those moons, Enceladus, is widely considered the most promising site in the solar system on which to look for alien life. If Enceladus is both young and bears life, that life must have come into being quickly. This is also believed to have been the case on Earth. Were it true on Enceladus, that would encourage the idea that life evolves easily when conditions are right.

One reason for thinking Saturn’s rings are young is that they are bright. The solar system is suffused with comet dust, and comet dust is dark. Leaving Saturn’s ring system (which Cassini has shown to be more than 90% water ice) out in such a mist is like leaving laundry hanging on a line downwind from a smokestack: it will get dirty. The lighter the rings are, the faster this will happen, for the less mass they contain, the less celestial pollution they can absorb before they start to discolour. Jeff Cuzzi, a scientist at America’s space agency, NASA, who helped run Cassini, told the Lunar and Planetary Science Conference in Houston that combining the mass estimates with Cassini’s measurements of the density of comet-dust near Saturn suggests the rings are no older than the first dinosaurs, nor younger than the last of them—that is, they are somewhere between 200m and 70m years old.

That timing fits well with a theory put forward in 2016, by Matija Cuk of the SETI Institute, in California and his colleagues. They suggest that at around the same time as the rings came into being an old set of moons orbiting Saturn destroyed themselves, and from their remains emerged not only the rings but also the planet’s current suite of inner moons—Rhea, Dione, Tethys, Enceladus and Mimas.

Dr Cuk and his colleagues used computer simulations of Saturn’s moons’ orbits as a sort of time machine. Looking at the rate at which tidal friction is causing these orbits to lengthen they extrapolated backwards to find out what those orbits would have looked like in the past. They discovered that about 100m years ago the orbits of two of them, Tethys and Dione, would have interacted in a way that left the planes in which they orbit markedly tilted. But their orbits are untilted. The obvious, if unsettling, conclusion was that this interaction never happened—and thus that at the time when it should have happened, Dione and Tethys were simply not there. They must have come into being later.

CAT/2018.2(RC)

Question. 63

The main objective of the passage is to:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

NOT everything looks lovelier the longer and closer its inspection. But Saturn does. It is gorgeous through Earthly telescopes. However, the 13 years of close observation provided by Cassini, an American spacecraft, showed the planet, its moons and its remarkable rings off better and better, revealing finer structures, striking novelties and greater drama.

By and large the big things in the solar system—planets and moons—are thought of as having been around since the beginning. The suggestion that rings and moons are new is, though, made even more interesting by the fact that one of those moons, Enceladus, is widely considered the most promising site in the solar system on which to look for alien life. If Enceladus is both young and bears life, that life must have come into being quickly. This is also believed to have been the case on Earth. Were it true on Enceladus, that would encourage the idea that life evolves easily when conditions are right.

One reason for thinking Saturn’s rings are young is that they are bright. The solar system is suffused with comet dust, and comet dust is dark. Leaving Saturn’s ring system (which Cassini has shown to be more than 90% water ice) out in such a mist is like leaving laundry hanging on a line downwind from a smokestack: it will get dirty. The lighter the rings are, the faster this will happen, for the less mass they contain, the less celestial pollution they can absorb before they start to discolour. Jeff Cuzzi, a scientist at America’s space agency, NASA, who helped run Cassini, told the Lunar and Planetary Science Conference in Houston that combining the mass estimates with Cassini’s measurements of the density of comet-dust near Saturn suggests the rings are no older than the first dinosaurs, nor younger than the last of them—that is, they are somewhere between 200m and 70m years old.

That timing fits well with a theory put forward in 2016, by Matija Cuk of the SETI Institute, in California and his colleagues. They suggest that at around the same time as the rings came into being an old set of moons orbiting Saturn destroyed themselves, and from their remains emerged not only the rings but also the planet’s current suite of inner moons—Rhea, Dione, Tethys, Enceladus and Mimas.

Dr Cuk and his colleagues used computer simulations of Saturn’s moons’ orbits as a sort of time machine. Looking at the rate at which tidal friction is causing these orbits to lengthen they extrapolated backwards to find out what those orbits would have looked like in the past. They discovered that about 100m years ago the orbits of two of them, Tethys and Dione, would have interacted in a way that left the planes in which they orbit markedly tilted. But their orbits are untilted. The obvious, if unsettling, conclusion was that this interaction never happened—and thus that at the time when it should have happened, Dione and Tethys were simply not there. They must have come into being later.

CAT/2018.2(RC)

Question. 64

Based on information provided in the passage, we can conclude all of the following EXCEPT:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Scientists have long recognised the incredible diversity within a species. But they thought it reflected evolutionary changes that unfolded imperceptibly, over millions of years. That divergence between populations within a species was enforced, according to Ernst Mayr, the great evolutionary biologist of the 1940s, when a population was separated from the rest of the species by a mountain range or a desert, preventing breeding across the divide over geologic scales of time. Without the separation, gene flow was relentless. But as the separation persisted, the isolated population grew apart and speciation occurred.

In the mid-1960s, the biologist Paul Ehrlich - author of The Population Bomb (1968) - and his Stanford University colleague Peter Raven challenged Mayr's ideas about speciation. They had studied checkerspot butterflies living in the Jasper Ridge Biological Preserve in California, and it soon became clear that they were not examining a single population. Through years of capturing, marking and then recapturing the butterflies, they were able to prove that within the population, spread over just 50 acres of suitable checkerspot habitat, there were three groups that rarely interacted despite their very close proximity.

Among other ideas, Ehrlich and Raven argued in a now classic paper from 1969 that gene flow was not as predictable and ubiquitous as Mayr and his cohort maintained, and thus evolutionary divergence between neighbouring groups in a population was probably common. They also asserted that isolation and gene flow were less important to evolutionary divergence than natural selection (when factors such as mate choice, weather, disease or predation cause better-adapted individuals to survive and pass on their successful genetic traits). For example, Ehrlich and Raven suggested that, without the force of natural selection, an isolated population would remain unchanged and that, in other scenarios, natural selection could be strong enough to overpower gene flow...

CAT/2017.1(RC)

Question. 65

Which of the following best sums up Ehrlich and Raven's argument in their classic 1969 paper?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Scientists have long recognised the incredible diversity within a species. But they thought it reflected evolutionary changes that unfolded imperceptibly, over millions of years. That divergence between populations within a species was enforced, according to Ernst Mayr, the great evolutionary biologist of the 1940s, when a population was separated from the rest of the species by a mountain range or a desert, preventing breeding across the divide over geologic scales of time. Without the separation, gene flow was relentless. But as the separation persisted, the isolated population grew apart and speciation occurred.

In the mid-1960s, the biologist Paul Ehrlich - author of The Population Bomb (1968) - and his Stanford University colleague Peter Raven challenged Mayr's ideas about speciation. They had studied checkerspot butterflies living in the Jasper Ridge Biological Preserve in California, and it soon became clear that they were not examining a single population. Through years of capturing, marking and then recapturing the butterflies, they were able to prove that within the population, spread over just 50 acres of suitable checkerspot habitat, there were three groups that rarely interacted despite their very close proximity.

Among other ideas, Ehrlich and Raven argued in a now classic paper from 1969 that gene flow was not as predictable and ubiquitous as Mayr and his cohort maintained, and thus evolutionary divergence between neighbouring groups in a population was probably common. They also asserted that isolation and gene flow were less important to evolutionary divergence than natural selection (when factors such as mate choice, weather, disease or predation cause better-adapted individuals to survive and pass on their successful genetic traits). For example, Ehrlich and Raven suggested that, without the force of natural selection, an isolated population would remain unchanged and that, in other scenarios, natural selection could be strong enough to overpower gene flow...

CAT/2017.1(RC)

Question. 66

All of the following statements are true according to the passage EXCEPT

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Scientists have long recognised the incredible diversity within a species. But they thought it reflected evolutionary changes that unfolded imperceptibly, over millions of years. That divergence between populations within a species was enforced, according to Ernst Mayr, the great evolutionary biologist of the 1940s, when a population was separated from the rest of the species by a mountain range or a desert, preventing breeding across the divide over geologic scales of time. Without the separation, gene flow was relentless. But as the separation persisted, the isolated population grew apart and speciation occurred.

In the mid-1960s, the biologist Paul Ehrlich - author of The Population Bomb (1968) - and his Stanford University colleague Peter Raven challenged Mayr's ideas about speciation. They had studied checkerspot butterflies living in the Jasper Ridge Biological Preserve in California, and it soon became clear that they were not examining a single population. Through years of capturing, marking and then recapturing the butterflies, they were able to prove that within the population, spread over just 50 acres of suitable checkerspot habitat, there were three groups that rarely interacted despite their very close proximity.

Among other ideas, Ehrlich and Raven argued in a now classic paper from 1969 that gene flow was not as predictable and ubiquitous as Mayr and his cohort maintained, and thus evolutionary divergence between neighbouring groups in a population was probably common. They also asserted that isolation and gene flow were less important to evolutionary divergence than natural selection (when factors such as mate choice, weather, disease or predation cause better-adapted individuals to survive and pass on their successful genetic traits). For example, Ehrlich and Raven suggested that, without the force of natural selection, an isolated population would remain unchanged and that, in other scenarios, natural selection could be strong enough to overpower gene flow...

CAT/2017.1(RC)

Question. 67

The author discusses Mayr, Ehrlich and Raven to demonstrate that

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

During the frigid season... it's often necessary to nestle under a blanket to try to stay warm. The temperature difference between the blanket and the air outside is so palpable that we often have trouble leaving our warm refuge. Many plants and animals similarly hunker down, relying on snow cover for safety from winter's harsh conditions. The small area between the snowpack and the ground, called the subnivium... might be the most important ecosystem that you have never heard of.

The subnivium is so well-insulated and stable that its temperature holds steady at around 32 degrees Fahrenheit (0 degrees Celsius). Although that might still sound cold, a constant temperature of 32 degrees Fahrenheit can often be 30 to 40 degrees warmer than the air temperature during the peak of winter. Because of this large temperature difference, a wide variety of species...depend on the subnivium for winter protection.

For many organisms living in temperate and Arctic regions, the difference between being under the snow or outside it is a matter of life and death. Consequently, disruptions to the subnivium brought about by climate change will affect everything from population dynamics to nutrient cycling through the ecosystem.

The formation and stability of the subnivium requires more than a few flurries. Winter ecologists have suggested that eight inches of snow is necessary to develop a stable layer of insulation. Depth is not the only factor, however. More accurately, the stability of the subnivium depends on the interaction between snow depth and snow density. Imagine being under a stack of blankets that are all flattened and pressed together. When compressed, the blankets essentially form one compacted layer. In contrast, when they are lightly placed on top of one another, their insulative capacity increases because the air pockets between them trap heat. Greater depths of low-density snow are therefore better at insulating the ground.

Both depth and density of snow are sensitive to temperature. Scientists are now beginning to explore how climate change will affect the subnivium, as well as the species that depend on it. At first glance, warmer winters seem beneficial for species that have difficulty surviving subzero temperatures; however, as with most ecological phenomena, the consequences are not so straightforward. Research has shown that the snow season (the period when snow is more likely than rain) has become shorter since 1970. When rain falls on snow, it increases the density of the snow and reduces its insulative capacity. Therefore, even though winters are expected to become warmer overall from future climate change, the subnivium will tend to become colder and more variable with less protection from the above-ground temperatures.

The effects of a colder subnivium are complex... For example, shrubs such as crowberry and alpine azalea that grow along the forest floor tend to block the wind and so retain higher depths of snow around them. This captured snow helps to keep soils insulated and in turn increases plant decomposition and nutrient release. In field experiments, researchers removed a portion of the snow cover to investigate the importance of the subnivium's insulation. They found that soil frost in the snow-free area resulted in damage to plant roots and sometimes even the death of the plant.

CAT/2017.2(RC)

Question. 68

The purpose of this passage is to

CAT/2017.2(RC)

Question. 69

All of the following statements are true EXCEPT

CAT/2017.2(RC)

Question. 70

Based on this extract, the author would support which one of the following actions?

CAT/2017.2(RC)

Question. 71

In paragraph 6, the author provides the examples of crowberry and alpine azalea to demonstrate that

CAT/2017.2(RC)

Question. 72

Which one of the following statements can be inferred from the passage?

CAT/2017.2(RC)

Question. 73

In paragraph 1, the author uses blankets as a device to

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

To discover the relation between rules, paradigms, and normal science, consider first how the historian isolates the particular loci of commitment that have been described as accepted rules. Close historical investigation of a given specialty at a given time discloses a set of recurrent and quasi-standard illustrations of various theories in their conceptual, observational, and instrumental applications. These are the community’s paradigms, revealed in its textbooks, lectures, and laboratory exercises. By studying them and by practicing with them, the members of the corresponding community learn their trade. The historian, of course, will discover in addition a penumbral area occupied by achievements whose status is still in doubt, but the core of solved problems and techniques will usually be clear. Despite occasional ambiguities, the paradigms of a mature scientific community can be determined with relative ease.

That demands a second step and one of a somewhat different kind. When undertaking it, the historian must compare the community’s paradigms with each other and with its current research reports. In doing so, his object is to discover what isolable elements, explicit or implicit, the members of that community may have abstracted from their more global paradigms and deployed as rules in their research. Anyone who has attempted to describe or analyze the evolution of a particular scientific tradition will necessarily have sought accepted principles and rules of this sort. Almost certainly, he will have met with at least partial success. But, if his experience has been at all like my own, he will have found the search for rules both more difficult and less satisfying than the search for paradigms. Some of the generalizations he employs to describe the community’s shared beliefs will present no problems. Others, however, will seem a shade too strong. Phrased in just that way, or in any other way he can imagine, they would almost certainly have been rejected by some members of the group he studies. Nevertheless, if the coherence of the research tradition is to be understood in terms of rules, some specification of common ground in the corresponding area is needed. As a result, the search for a body of rules competent to constitute a given normal research tradition becomes a source of continual and deep frustration.

Recognizing that frustration, however, makes it possible to diagnose its source. Scientists can agree that Newton, Lavoisier, Maxwell, or Einstein has produced an apparently permanent solution to a group of outstanding problems and still disagree, sometimes without being aware of it, about the particular abstract characteristics that make those solutions permanent. They can, that is, agree in their identification of a paradigm without agreeing on, or even attempting to produce, a full interpretation or rationalization of it. Lack of a standard interpretation or of an agreed reduction to rules will not prevent a paradigm from guiding research. Normal science can be determined in part by the direct inspection of paradigms, a process that is often aided by but does not depend upon the formulation of rules and assumptions. Indeed, the existence of a paradigm need not even imply that any full set of rules exists.

CAT/2007(RC)

Question. 74

What is the author attempting to illustrate through this passage?

CAT/2007(RC)

Question. 75

The term ‘loci of commitment’ as used in the passage would most likely correspond with which of the following?

CAT/2007(RC)

Question. 76

The author of this passage is likely to agree with which of the following?

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Recently I spent several hours sitting under a tree in my garden with the social anthropologist William Ury, a Harvard University professor who specializes in the art of negotiation, and wrote the bestselling book, Getting to Yes. He captivated me with his theory that tribalism protects people from their fear of rapid change. He explained that the pillars of tribalism that humans rely on for security would always counter any significant cultural or social change. In this way, he said, change is never allowed to happen too fast. Technology, for example, is a pillar of society. Ury believes that every time technology moves in a new or radical direction, another pillar such as religion or nationalism will grow stronger - in effect, the traditional and familiar will assume greater importance to compensate for the new and untested. In this manner, human tribes avoid rapid change that leaves people insecure and frightened.

But we have all heard that nothing is as permanent as change. Nothing is guaranteed. Pithy expressions, to be sure, but no more than cliches. As Ury says, people don’t live that way from day to day. On the contrary, they actively seek certainty and stability. They want to know they will be safe.

Even so, we scare ourselves constantly with the idea of change. An IBM CEO once said: ‘We only re-structure for a good reason, and if we haven’t re-structured in a while, that’s a good reason.’ We are scared that competitors, technology and the consumer will put us out of business – so we have to change all the time just to stay alive. But if we asked our fathers and grandfathers, would they have said that they lived in a period of little change? The structure may not have changed much. It may just be the speed with which we do things.

Change is over-rated, anyway. Consider the automobile. It’s an especially valuable example because the auto industry has spent tens of billions of dollars on research and product development in the last 100 years. Henry Ford’s first car had a metal chassis with an internal combustion, gasoline-powered engine, four wheels with rubber tyres, a foot-operated clutch assembly and brake system, a steering wheel, and four seats, and it could safely do 18 miles per hour. A hundred years and tens of thousands of research hours later, we drive cars with a metal chassis with an internal combustion, gasoline-powered engine, four wheels with rubber tyres, a foot-operated clutch assembly and brake system, a steering wheel, four seats – and the average speed in London in 2001 was 17.5 miles per hour!

That’s not a hell of a lot of return for the money. Ford evidently doesn’t have much to teach us about change. The fact that they’re still manufacturing cars is not proof that Ford Motor Co. is a sound organization, just proof that it takes very large companies to make cars in great quantities – which makes for an almost impregnable entry barrier.

Fifty years after the development of the jet engine, planes are also little changed. They’ve grown bigger, wider and can carry more people. But those are incremental, largely cosmetic changes.

Taken together, this lack of real change has come to mean that in travel – whether driving or flying – time and technology have not combined to make things much better. The safety and design have of course accompanied the times and the new volume of cars and flights, but nothing of any significance has changed in the basic assumptions of the final product.

At the same time, moving around in cars or aeroplanes becomes less and less efficient all the time. Not only has there been no great change, but also both forms of transport have deteriorated as more people clamour to use them. The same is true for telephones, which took over a hundred years to become a mobile, or photographic film, which also required an entire century to change.

The only explanation for this is anthropological. Once established in calcified organizations, humans do two things: sabotage changes that might render people dispensable, and ensure industry-wide emulation. In the 1960s, German auto companies developed plans to scrap the entire combustion engine for an electrical design (the same existed in the 1970s in Japan, and in the 1980s in France). So for 40 years, we might have been free of the wasteful and ludicrous dependence on fossil fuels. Why didn’t it go anywhere? Because auto executives understood pistons and carburettors and would be loath to cannibalize their expertise, along with most of their factories.

CAT/2004(RC)

Question. 77

According to the passage, which of the following statements is true?

CAT/2004(RC)

Question. 78

Which of the following views does the author fully support in the passage?

CAT/2004(RC)

Question. 79

Which of the following best describes one of the main ideas discussed in the passage?

CAT/2004(RC)

Question. 80

According to the passage, the reason why we continued to be dependent on fossil fuels is that:

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Throughout human history, the leading causes of death have been infection and trauma. Modern medicine has scored significant victories against both, and the major causes of ill health are now chronic degenerative diseases, such as coronary artery disease, arthritis, osteoporosis, Alzheimer’s, macular degeneration, cataract, and cancer. These have a long latency period before symptoms appear and a diagnosis is made. It follows that the majority of apparently healthy people are pre-ill.

But are these conditions inevitably degenerative? A truly preventive medicine that focused on the pre-ill, analyzing the metabolic errors which lead to clinical illness, might be able to correct them before the first symptom. Genetic risk factors are known for all chronic degenerative diseases and are important to the individuals who possess them. At the population level, however, migration studies confirm that these illnesses are linked for the most part to lifestyle factors --- exercise, smoking, and nutrition. Nutrition is the easiest of these to change, and the most versatile tool for affecting the metabolic changes needed to tilt the balance away from disease.

Many national surveys reveal that malnutrition is common in developed countries. This is not the calorie and/or micronutrient deficiency associated with developing nations (Type A malnutrition); but multiple micronutrient depletion, usually combined with calorific balance or excess (Type B malnutrition). The incidence and severity of Type B malnutrition will be shown to be worse if newer micronutrient groups such as the essential fatty acids, xanthophylls, and flavonoids are included in the surveys. Commonly ingested levels of these micronutrients seem to be far too low in many developed countries.

There is now considerable evidence that Type B malnutrition is a major cause of chronic degenerative diseases. If this is the case, then it is logical to treat such diseases not with drugs but with multiple micronutrient repletion, or ‘pharmaco-nutrition’. This can take the form of pills and capsules --- ‘nutraceuticals’, or food formats known as ‘functional foods’. This approach has been neglected hitherto because it is relatively unprofitable for drug companies --- the products are hard to patent --- and it is a strategy that does not sit easily with modern medical interventionism. Over the last 100 years, the drug industry has invested huge sums in developing a range of subtle and powerful drugs to treat the many diseases we are subject to. Medical training is couched in pharmaceutical terms and this approach has provided us with an exceptional range of therapeutic tools in the treatment of disease and in acute medical emergencies. However, the pharmaceutical model has also created an unhealthy dependency culture, in which relatively few of us accept responsibility for maintaining our own health. Instead, we have handed over this responsibility to health professionals who know very little about health maintenance, or disease prevention.

One problem for supporters of this argument is the lack of the right kind of hard evidence. We have a wealth of epidemiological data linking diet and disease risks, and a great deal of information on mechanism: how food factors interact with our biochemistry. But almost all intervention studies with micronutrients, with the notable exception of the omega 3 fatty acids, have so far produced conflicting or negative results. In other words, our science appears to have no predictive value. Does this invalidate the science? Or are we simply asking the wrong questions?

Based on pharmaceutical thinking, most intervention studies have attempted to measure the impact of a single micronutrient on the incidence of disease. The classical approach says that if you give a compound formula to test subjects and obtain positive results, you cannot know which ingredient is exerting the benefit, so you must test each ingredient individually. But in the field of nutrition, this does not work. Each intervention on its own will hardly make enough difference to be measured. The best therapeutic response must therefore combine micronutrients to normalize our internal physiology. So do we need to analyze each individual’s nutritional status and then tailor a formula specifically for him or her? While we do not have the resources to analyze millions of individual cases, there is no need to do so. The vast majority of people are consuming suboptimal amounts of most micronutrients, and most of the micronutrients concerned are very safe. Accordingly, a comprehensive and universal program of micronutrient support is probably the most cost-effective and safest way of improving the general health of the nation.

CAT/2004(RC)

Question. 81

Type-B malnutrition is a serious concern in developed countries because

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Throughout human history, the leading causes of death have been infection and trauma. Modern medicine has scored significant victories against both, and the major causes of ill health are now chronic degenerative diseases, such as coronary artery disease, arthritis, osteoporosis, Alzheimer’s, macular degeneration, cataract, and cancer. These have a long latency period before symptoms appear and a diagnosis is made. It follows that the majority of apparently healthy people are pre-ill.

But are these conditions inevitably degenerative? A truly preventive medicine that focused on the pre-ill, analyzing the metabolic errors which lead to clinical illness, might be able to correct them before the first symptom. Genetic risk factors are known for all chronic degenerative diseases and are important to the individuals who possess them. at the population level, however, migration studies confirm that these illnesses are linked for the most part to lifestyle factors --- exercise, smoking, and nutrition. Nutrition is the easiest of these to change, and the most versatile tool for affecting the metabolic changes needed to tilt the balance away from disease.

Many national surveys reveal that malnutrition is common in developed countries. This is not the calorie and/or micronutrient deficiency associated with developing nations (Type A malnutrition); but multiple micronutrient depletion, usually combined with calorific balance or excess (Type B malnutrition). The incidence and severity of Type B malnutrition will be shown to be worse if newer micronutrient groups such as the essential fatty acids, xanthophylls, and flavonoids are included in the surveys. Commonly ingested levels of these micronutrients seem to be far too low in many developed countries.

There is now considerable evidence that Type B malnutrition is a major cause of chronic degenerative diseases. If this is the case, then it is logical to treat such diseases not with drugs but with multiple micronutrient depletion, or pharmaco-nutrition’. This can take the form of pills and capsules --- ‘nutraceuticals’, or food formats known as ‘functional foods’. This approach has been neglected hitherto because it is relatively unprofitable for drug companies --- the products are hard to patent --- and it is a strategy that does not sit easily with modern medical interventionism. Over the last 100 years, the drug industry has invested huge sums in developing a range of subtle and powerful drugs to treat the many diseases we are subject to. Medical training is couched in pharmaceutical terms and this approach has provided us with an exceptional range of therapeutic tools in the treatment of disease and in acute medical emergencies. However, the pharmaceutical model has also created an unhealthy dependency culture, in which relatively few of us accept responsibility for maintaining our own health. Instead, we have handed over this responsibility to health professionals who know very little about health maintenance, or disease prevention.

One problem for supporters of this argument is the lack of the right kind of hard evidence. We have a wealth of epidemiological data linking dietary factors and disease risks, and a great deal of information on mechanism: how food factors interact with our biochemistry. But almost all intervention studies with micronutrients, with the notable exception of the omega 3 fatty acids, have so far produced conflicting or negative results. In other words, our science appears to have no predictive value. Does this invalidate the science? Or are we simply asking the wrong questions?

Based on pharmaceutical thinking, most intervention studies have attempted to measure the impact of a single micronutrient on the incidence of disease. The classical approach says that if you give a compound formula to test subjects and obtain positive results, you cannot know which ingredient is exerting the benefit, so you must test each ingredient individually. But in the field of nutrition, this does not work. Each intervention on its own will hardly make enough difference to be measured. The best therapeutic response must therefore combine micronutrients to normalize our internal physiology. So do we need to analyze each individual’s nutritional status and then tailor a formula specifically for him or her? While we do not have the resources to analyze millions of individual cases, there is no need to do so. The vast majority of people are consuming suboptimal amounts of most micronutrients, and most of the micronutrients concerned are very safe. Accordingly, a comprehensive and universal program of micronutrient support is probably the most cost-effective and safest way of improving the general health of the nation.

CAT/2004(RC)

Question. 82

Why are a large number of apparently healthy people deemed pre-ill?

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Throughout human history, the leading causes of death have been infection and trauma. Modern medicine has scored significant victories against both, and the major causes of ill health are now chronic degenerative diseases, such as coronary artery disease, arthritis, osteoporosis, Alzheimer’s, macular degeneration, cataract, and cancer. These have a long latency period before symptoms appear and a diagnosis is made. It follows that the majority of apparently healthy people are pre-ill.

But are these conditions inevitably degenerative? A truly preventive medicine that focused on the pre-ill, analyzing the metabolic errors which lead to clinical illness, might be able to correct them before the first symptom. Genetic risk factors are known for all chronic degenerative diseases and are important to the individuals who possess them. At the population level, however, migration studies confirm that these illnesses are linked for the most part to lifestyle factors --- exercise, smoking, and nutrition. Nutrition is the easiest of these to change, and the most versatile tool for effecting the metabolic changes needed to tilt the balance away from disease.

Many national surveys reveal that malnutrition is common in developed countries. This is not the calorie and/or micronutrient deficiency associated with developing nations (Type A malnutrition); but multiple micronutrient depletion, usually combined with calorific balance or excess (Type B malnutrition). The incidence and severity of Type B malnutrition will be shown to be worse if newer micronutrient groups such as the essential fatty acids, xanthophylls, and flavonoids are included in the surveys. Commonly ingested levels of these micronutrients seem to be far too low in many developed countries.

There is now considerable evidence that Type B malnutrition is a major cause of chronic degenerative diseases. If this is the case, then it is logical to treat such diseases not with drugs but with multiple micronutrient repletion, or ‘pharmaco-nutrition’. This can take the form of pills and capsules --- ‘nutraceuticals’ --- or food formats known as ‘functional foods’. This approach has been neglected hitherto because it is relatively unprofitable for drug companies --- the products are hard to patent --- and it is a strategy that does not sit easily with modern medical interventionism. Over the last 100 years, the drug industry has invested huge sums in developing a range of subtle and powerful drugs to treat the many diseases we are subject to. Medical training is couched in pharmaceutical terms and this approach has provided us with an exceptional range of therapeutic tools in the treatment of disease and in acute medical emergencies. However, the pharmaceutical model has also created an unhealthy dependency culture, in which relatively few of us accept responsibility for maintaining our own health. Instead, we have handed over this responsibility to health professionals who know very little about health maintenance, or disease prevention.

One problem for supporters of this argument is the lack of the right kind of hard evidence. We have a wealth of epidemiological data linking dietary factors and disease risks, and a great deal of information on mechanism: how food factors interact with our biochemistry. But almost all intervention studies with micronutrients, with the notable exception of the omega 3 fatty acids, have so far produced conflicting or negative results. In other words, our science appears to have no predictive value. Does this invalidate the science? Or are we simply asking the wrong questions?

Based on pharmaceutical thinking, most intervention studies have attempted to measure the impact of a single micronutrient on the incidence of disease. The classical approach says that if you give a compound formula to test subjects and obtain positive results, you cannot know which ingredient is exerting the benefit, so you must test each ingredient individually. But in the field of nutrition, this does not work. Each intervention on its own will hardly make enough difference to be measured. The best therapeutic response must therefore combine micronutrients to normalize our internal physiology. So do we need to analyze each individual’s nutritional status and then tailor a formula specifically for him or her? While we do not have the resources to analyze millions of individual cases, there is no need to do so. The vast majority of people are consuming suboptimal amounts of most micronutrients, and most of the micronutrients concerned are very safe. Accordingly, a comprehensive and universal program of micronutrient support is probably the most cost-effective and safest way of improving the general health of the nation.

CAT/2004(RC)

Question. 83

The author recommends micronutrient-repletion for large-scale treatment of chronic degenerative disease because

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Throughout human history, the leading causes of death have been infection and trauma. Modern medicine has scored significant victories against both, and the major causes of ill health are now chronic degenerative diseases, such as coronary artery disease, arthritis, osteoporosis, Alzheimer’s, macular degeneration, cataract, and cancer. These have a long latency period before symptoms appear and a diagnosis is made. It follows that the majority of apparently healthy people are pre-ill.

But are these conditions inevitably degenerative? A truly preventive medicine that focused on the pre-ill, analyzing the metabolic errors which lead to clinical illness, might be able to correct them before the first symptom. Genetic risk factors are known for all chronic degenerative diseases and are important to the individuals who possess them. At the population level, however, migration studies confirm that these illnesses are linked for the most part to lifestyle factors --- exercise, smoking, and nutrition. Nutrition is the easiest of these to change, and the most versatile tool for effecting the metabolic changes needed to tilt the balance away from disease.

Many national surveys reveal that malnutrition is common in developed countries. This is not the calorie and/or micronutrient deficiency associated with developing nations (Type A malnutrition); but multiple micronutrient depletion, usually combined with calorific balance or excess (Type B malnutrition). The incidence and severity of Type B malnutrition will be shown to be worse if newer micronutrient groups such as the essential fatty acids, xanthophylls, and flavonoids are included in the surveys. Commonly ingested levels of these micronutrients seem to be far too low in many developed countries.

There is now considerable evidence that Type B malnutrition is a major cause of chronic degenerative diseases. If this is the case, then it is logical to treat such diseases not with drugs but with multiple micronutrient repletion, or ‘pharmaco-nutrition’. This can take the form of pills and capsules --- ‘nutraceuticals’ --- or food formats known as ‘functional foods’. This approach has been neglected hitherto because it is relatively unprofitable for drug companies --- the products are hard to patent --- and it is a strategy that does not sit easily with modern medical interventionism. Over the last 100 years, the drug industry has invested huge sums in developing a range of subtle and powerful drugs to treat the many diseases we are subject to. Medical training is couched in pharmaceutical terms and this approach has provided us with an exceptional range of therapeutic tools in the treatment of disease and in acute medical emergencies. However, the pharmaceutical model has also created an unhealthy dependency culture, in which relatively few of us accept responsibility for maintaining our own health. Instead, we have handed over this responsibility to health professionals who know very little about health maintenance, or disease prevention.

One problem for supporters of this argument is the lack of the right kind of hard evidence. We have a wealth of epidemiological data linking dietary factors and disease risks, and a great deal of information on mechanism: how food factors interact with our biochemistry. But almost all intervention studies with micronutrients, with the notable exception of the omega 3 fatty acids, have so far produced conflicting or negative results. In other words, our science appears to have no predictive value. Does this invalidate the science? Or are we simply asking the wrong questions?

Based on pharmaceutical thinking, most intervention studies have attempted to measure the impact of a single micronutrient on the incidence of disease. The classical approach says that if you give a compound formula to test subjects and obtain positive results, you cannot know which ingredient is exerting the benefit, so you must test each ingredient individually. But in the field of nutrition, this does not work. Each intervention on its own will hardly make enough difference to be measured. The best therapeutic response must therefore combine micronutrients to normalize our internal physiology. So do we need to analyze each individual’s nutritional status and then tailor a formula specifically for him or her? While we do not have the resources to analyze millions of individual cases, there is no need to do so. The vast majority of people are consuming suboptimal amounts of most micronutrients, and most of the micronutrients concerned are very safe. Accordingly, a comprehensive and universal program of micronutrient support is probably the most cost-effective and safest way of improving the general health of the nation.

CAT/2004(RC)

Question. 84

Tailoring micronutrient-based treatment plans to suit individual deficiency profiles is not necessary because

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Fifty feet away three male lions lay by the road. They didn’t appear to have hair on their heads. Noting the color of their noses (leonine noses darken as they age, from pink to black), Craig estimated that they were six years old – young adults. “This is wonderful!” he said, after staring at them for several moments. “This is what we came to see. They really are maneless.” Craig, a professor at the University of Minnesota, is arguably the leading expert on the majestic Serengeti lion, whose head is mantled in long, thick hair. He and Peyton West, a doctoral student who has been working with him in Tanzania, had never seen the Tsavo lions that live some 200 miles east of the Serengeti. The scientists had partly suspected that the maneless males were adolescents mistaken for adults by amateur observers. Now they knew better.

The Tsavo research expedition was mostly Peyton’s show. She had spent several years in Tanzania, compiling the data she needed to answer a question that ought to have been answered long ago: why do lions have manes? It’s the only cat, wild or domestic that displays such ornamentation. In Tsavo, she was attacking the riddle from the opposite angle. Why do its lions not have manes? (Some “maneless” lions in Tsavo East do have partial manes, but they rarely attain the regal glory of the Serengeti lions’.) Does environmental adaptation account for the trait? Are the lions of Tsavo, as some people believe, a distinct subspecies of their Serengeti cousins?

The Serengeti lions have been under continuous observation for more than 35 years, beginning with George Schaller’s pioneering work in the 1960s. But the lions in Tsavo, Kenya’s oldest and largest protected ecosystem, have hardly been studied. Consequently, legends have grown up around them. Not only do they look different, according to the myths, but they also behave differently, displaying greater cunning and aggressiveness. “Remember too,” Kenya: The Rough Guide warns, “Tsavo’s lions have a reputation for ferocity.” Their fearsome image became well-known in 1898 when two males stalled construction of what is now Kenya Railways by allegedly killing and eating 135 Indian and African laborers. A British Army officer in charge of building a railroad bridge over the Tsavo River, Lt. Col. J. H. Patterson, spent nine months pursuing the pair before he brought them to bay and killed them. Stuffed and mounted, they now glare at visitors to the Field Museum in Chicago. Patterson’s account of the lions, The Man-Eaters of Tsavo, was an international best-seller when published in 1907. Still in print, the book has made Tsavo’s lions notorious. That annoys some scientists. “People don’t want to give up on mythology,” Dennis King told me one day. The zoologist has been working in Tsavo off and on for four years. “I am so sick of this man-eater business. Patterson made a helluva lot of money off that story, but Tsavo’s lions are no more likely to turn man-eater than lions from elsewhere.”

But tales of their savagery and wiliness don’t all come from sensationalist authors looking to make a buck. Tsavo lions are generally larger than lions elsewhere, enabling them to take down the predominant prey animal in Tsavo, the Cape buffalo, one of the strongest, most aggressive animals on Earth. The buffalo don’t give up easily: They often kill or severely injure an attacking lion, and a wounded lion might be more likely to turn to cattle and humans for food.

And other prey is less abundant in Tsavo than in other traditional lion haunts. A hungry lion is more likely to attack humans. Safari guides and Kenya Wildlife Service rangers tell of lions attacking Land Rovers, raiding camps, stalking tourists. Tsavo is a tough neighborhood, they say, and it breeds tougher lions.

But are they really tougher? And if so, is there any connection between their manelessness and their ferocity? An intriguing hypothesis was advanced two years ago by Gnoske and Peterhans: Tsavo lions may be similar to the maneless cave lions of the Pleistocene. The Serengeti variety is among the most evolved of the species – the latest model, so to speak – while certain morphological differences in Tsavo lions (bigger bodies, smaller skulls, and maybe even the lack of a mane) suggest that they are closer to the primitive ancestor of all lions. Craig and Peyton had serious doubts about this idea but admitted that Tsavo lions pose a mystery to science.

CAT/2004(RC)

Question. 85

The book Man-Eaters of Tsavo annoys some scientists because

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Fifty feet away three male lions lay by the road. They didn’t appear to have hair on their heads. Noting the color of their noses (leonine noses darken as they age, from pink to black), Craig estimated that they were six years old – young adults. “This is wonderful!” he said, after staring at them for several moments. “This is what we came to see. They really are maneless.” Craig, a professor at the University of Minnesota, is arguably the leading expert on the majestic Serengeti lion, whose head is mantled in long, thick hair. He and Peyton West, a doctoral student who has been working with him in Tanzania, had never seen the Tsavo lions that live some 200 miles east of the Serengeti. The scientists had partly suspected that the maneless males were adolescents mistaken for adults by amateur observers. Now they knew better.

The Tsavo research expedition was mostly Peyton’s show. She had spent several years in Tanzania, compiling the data she needed to answer a question that ought to have been answered long ago: why do lions have manes? It’s the only cat, wild or domestic that displays such ornamentation. In Tsavo, she was attacking the riddle from the opposite angle. Why do its lions not have manes? (Some “maneless” lions in Tsavo East do have partial manes, but they rarely attain the regal glory of the Serengeti lions’.) Does environmental adaptation account for the trait? Are the lions of Tsavo, as some people believe, a distinct subspecies of their Serengeti cousins?

The Serengeti lions have been under continuous observation for more than 35 years, beginning with George Schaller’s pioneering work in the 1960s. But the lions in Tsavo, Kenya’s oldest and largest protected ecosystem, have hardly been studied. Consequently, legends have grown up around them. Not only do they look different, according to the myths, but they also behave differently, displaying greater cunning and aggressiveness. “Remember too,” Kenya: The Rough Guide warns, “Tsavo’s lions have a reputation for ferocity.” Their fearsome image became well-known in 1898 when two males stalled construction of what is now Kenya Railways by allegedly killing and eating 135 Indian and African laborers. A British Army officer in charge of building a railroad bridge over the Tsavo River, Lt. Col. J. H. Patterson, spent nine months pursuing the pair before he brought them to bay and killed them. Stuffed and mounted, they now glare at visitors to the Field Museum in Chicago. Patterson’s account of the lions, The Man-Eaters of Tsavo, was an international best-seller when published in 1907. Still in print, the book has made Tsavo’s lions notorious. That annoys some scientists. “People don’t want to give up on mythology,” Dennis King told me one day. The zoologist has been working in Tsavo off and on for four years. “I am so sick of this man-eater business. Patterson made a helluva lot of money off that story, but Tsavo’s lions are no more likely to turn man-eater than lions from elsewhere.”

But tales of their savagery and wiliness don’t all come from sensationalist authors looking to make a buck. Tsavo lions are generally larger than lions elsewhere, enabling them to take down the predominant prey animal in Tsavo, the Cape buffalo, one of the strongest, most aggressive animals on Earth. The buffalo don’t give up easily: They often kill or severely injure an attacking lion, and a wounded lion might be more likely to turn to cattle and humans for food.

And other prey is less abundant in Tsavo than in other traditional lion haunts. A hungry lion is more likely to attack humans. Safari guides and Kenya Wildlife Service rangers tell of lions attacking Land Rovers, raiding camps, stalking tourists. Tsavo is a tough neighborhood, they say, and it breeds tougher lions.

But are they really tougher? And if so, is there any connection between their manelessness and their ferocity? An intriguing hypothesis was advanced two years ago by Gnoske and Peterhans: Tsavo lions may be similar to the maneless cave lions of the Pleistocene. The Serengeti variety is among the most evolved of the species – the latest model, so to speak – while certain morphological differences in Tsavo lions (bigger bodies, smaller skulls, and maybe even the lack of a mane) suggest that they are closer to the primitive ancestor of all lions. Craig and Peyton had serious doubts about this idea but admitted that Tsavo lions pose a mystery to science.

CAT/2004(RC)

Question. 86

According to the passage, which of the following has NOT contributed to the popular image of Tsavo lions as savage creatures?

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Fifty feet away three male lions lay by the road. They didn’t appear to have hair on their heads. Noting the color of their noses (leonine noses darken as they age, from pink to black), Craig estimated that they were six years old – young adults. “This is wonderful!” he said, after staring at them for several moments. “This is what we came to see. They really are maneless.” Craig, a professor at the University of Minnesota, is arguably the leading expert on the majestic Serengeti lion, whose head is mantled in long, thick hair. He and Peyton West, a doctoral student who has been working with him in Tanzania, had never seen the Tsavo lions that live some 200 miles east of the Serengeti. The scientists had partly suspected that the maneless males were adolescents mistaken for adults by amateur observers. Now they knew better.

The Tsavo research expedition was mostly Peyton’s show. She had spent several years in Tanzania, compiling the data she needed to answer a question that ought to have been answered long ago: why do lions have manes? It’s the only cat, wild or domestic that displays such ornamentation. In Tsavo, she was attacking the riddle from the opposite angle. Why do its lions not have manes? (Some “maneless” lions in Tsavo East do have partial manes, but they rarely attain the regal glory of the Serengeti lions’.) Does environmental adaptation account for the trait? Are the lions of Tsavo, as some people believe, a distinct subspecies of their Serengeti cousins?

The Serengeti lions have been under continuous observation for more than 35 years, beginning with George Schaller’s pioneering work in the 1960s. But the lions in Tsavo, Kenya’s oldest and largest protected ecosystem, have hardly been studied. Consequently, legends have grown up around them. Not only do they look different, according to the myths, but they also behave differently, displaying greater cunning and aggressiveness. “Remember too,” Kenya: The Rough Guide warns, “Tsavo’s lions have a reputation for ferocity.” Their fearsome image became well-known in 1898 when two males stalled construction of what is now Kenya Railways by allegedly killing and eating 135 Indian and African laborers. A British Army officer in charge of building a railroad bridge over the Tsavo River, Lt. Col. J. H. Patterson, spent nine months pursuing the pair before he brought them to bay and killed them. Stuffed and mounted, they now glare at visitors to the Field Museum in Chicago. Patterson’s account of the lions, The Man-Eaters of Tsavo, was an international best-seller when published in 1907. Still in print, the book has made Tsavo’s lions notorious. That annoys some scientists. “People don’t want to give up on mythology,” Dennis King told me one day. The zoologist has been working in Tsavo off and on for four years. “I am so sick of this man-eater business. Patterson made a helluva lot of money off that story, but Tsavo’s lions are no more likely to turn man-eater than lions from elsewhere.”

But tales of their savagery and wiliness don’t all come from sensationalist authors looking to make a buck. Tsavo lions are generally larger than lions elsewhere, enabling them to take down the predominant prey animal in Tsavo, the Cape buffalo, one of the strongest, most aggressive animals on Earth. The buffalo don’t give up easily: They often kill or severely injure an attacking lion, and a wounded lion might be more likely to turn to cattle and humans for food.

And other prey is less abundant in Tsavo than in other traditional lion haunts. A hungry lion is more likely to attack humans. Safari guides and Kenya Wildlife Service rangers tell of lions attacking Land Rovers, raiding camps, stalking tourists. Tsavo is a tough neighborhood, they say, and it breeds tougher lions.

But are they really tougher? And if so, is there any connection between their manelessness and their ferocity? An intriguing hypothesis was advanced two years ago by Gnoske and Peterhans: Tsavo lions may be similar to the maneless cave lions of the Pleistocene. The Serengeti variety is among the most evolved of the species – the latest model, so to speak – while certain morphological differences in Tsavo lions (bigger bodies, smaller skulls, and maybe even the lack of a mane) suggest that they are closer to the primitive ancestor of all lions. Craig and Peyton had serious doubts about this idea but admitted that Tsavo lions pose a mystery to science.

CAT/2004(RC)

Question. 87

The sentence which concludes the first paragraph, “Now they knew better”, implies that:

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Fifty feet away three male lions lay by the road. They didn’t appear to have hair on their heads. Noting the color of their noses (leonine noses darken as they age, from pink to black), Craig estimated that they were six years old – young adults. “This is wonderful!” he said, after staring at them for several moments. “This is what we came to see. They really are maneless.” Craig, a professor at the University of Minnesota, is arguably the leading expert on the majestic Serengeti lion, whose head is mantled in long, thick hair. He and Peyton West, a doctoral student who has been working with him in Tanzania, had never seen the Tsavo lions that live some 200 miles east of the Serengeti. The scientists had partly suspected that the maneless males were adolescents mistaken for adults by amateur observers. Now they knew better.

The Tsavo research expedition was mostly Peyton’s show. She had spent several years in Tanzania, compiling the data she needed to answer a question that ought to have been answered long ago: why do lions have manes? It’s the only cat, wild or domestic that displays such ornamentation. In Tsavo, she was attacking the riddle from the opposite angle. Why do its lions not have manes? (Some “maneless” lions in Tsavo East do have partial manes, but they rarely attain the regal glory of the Serengeti lions’.) Does environmental adaptation account for the trait? Are the lions of Tsavo, as some people believe, a distinct subspecies of their Serengeti cousins?

The Serengeti lions have been under continuous observation for more than 35 years, beginning with George Schaller’s pioneering work in the 1960s. But the lions in Tsavo, Kenya’s oldest and largest protected ecosystem, have hardly been studied. Consequently, legends have grown up around them. Not only do they look different, according to the myths, but they also behave differently, displaying greater cunning and aggressiveness. “Remember too,” Kenya: The Rough Guide warns, “Tsavo’s lions have a reputation for ferocity.” Their fearsome image became well-known in 1898 when two males stalled construction of what is now Kenya Railways by allegedly killing and eating 135 Indian and African laborers. A British Army officer in charge of building a railroad bridge over the Tsavo River, Lt. Col. J. H. Patterson, spent nine months pursuing the pair before he brought them to bay and killed them. Stuffed and mounted, they now glare at visitors to the Field Museum in Chicago. Patterson’s account of the lions, The Man-Eaters of Tsavo, was an international best-seller when published in 1907. Still in print, the book has made Tsavo’s lions notorious. That annoys some scientists. “People don’t want to give up on mythology,” Dennis King told me one day. The zoologist has been working in Tsavo off and on for four years. “I am so sick of this man-eater business. Patterson made a helluva lot of money off that story, but Tsavo’s lions are no more likely to turn man-eater than lions from elsewhere.”

But tales of their savagery and wiliness don’t all come from sensationalist authors looking to make a buck. Tsavo lions are generally larger than lions elsewhere, enabling them to take down the predominant prey animal in Tsavo, the Cape buffalo, one of the strongest, most aggressive animals on Earth. The buffalo don’t give up easily: They often kill or severely injure an attacking lion, and a wounded lion might be more likely to turn to cattle and humans for food.

And other prey is less abundant in Tsavo than in other traditional lion haunts. A hungry lion is more likely to attack humans. Safari guides and Kenya Wildlife Service rangers tell of lions attacking Land Rovers, raiding camps, stalking tourists. Tsavo is a tough neighborhood, they say, and it breeds tougher lions.

But are they really tougher? And if so, is there any connection between their manelessness and their ferocity? An intriguing hypothesis was advanced two years ago by Gnoske and Peterhans: Tsavo lions may be similar to the maneless cave lions of the Pleistocene. The Serengeti variety is among the most evolved of the species – the latest model, so to speak – while certain morphological differences in Tsavo lions (bigger bodies, smaller skulls, and maybe even the lack of a mane) suggest that they are closer to the primitive ancestor of all lions. Craig and Peyton had serious doubts about this idea but admitted that Tsavo lions pose a mystery to science.

CAT/2004(RC)

Question. 88

Which of the following, if true, would weaken the hypothesis advanced by Gnoske and Peterhans most?

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The controversy over genetically-modified food continues unabated in the West. Genetic modification (GM) is the science by which the genetic material of a plant is altered, perhaps to make it more resistant to pests or killer weeds or to enhance its nutritional value. Many food biotechnologists claim that GM will be a major contribution of science to mankind in the 21st century. On the other hand, large numbers of opponents, mainly in Europe, claim that the benefits of GM are a myth propagated by multinational corporations to increase their profits, that they pose a health hazard, and have therefore called for governments to ban the sale of genetically-modified food.

The anti-GM campaign has been quite effective in Europe, with several European Union member countries imposing a virtual ban for five years over genetically modified food imports. Since the genetically modified food industry is particularly strong in the United States of America, the controversy also constitutes another chapter in the US-Europe skirmishes which have become particularly acerbic after the US invasion of Iraq.

To a large extent, the GM controversy has been ignored in the Indian media, although Indian biotechnologists have been quite active in GM research. Several groups of Indian biotechnologists have been working on various issues connected with crops grown in India. One concrete achievement which has recently figured in the news is that of a team led by the former vice-chancellor of Jawaharlal Nehru University, Asis Datta – it has successfully added an extra gene to potatoes to enhance the protein content of the tuber by at least 30 percent. Not surprisingly, the new potato has been called the protato. The protato is now in its third year of field trials. It is quite likely that the GM controversy will soon hit the headlines in India since a spokesperson of the Indian Central government has recently announced that the government may use the protato in its midday meal programme for schools as early as next year.

Why should “scientific progress”, with huge potential benefits to the poor and malnourished, be so controversial? The anti-GM lobby contends that pernicious propaganda has vastly exaggerated the benefits of GM and completely evaded the costs which will have to be incurred if the genetically-modified food industry is allowed to grow unchecked. In particular, they allude to different types of costs.

This group contends that the most important potential cost is that the widespread distribution and growth of genetically-modified food will enable the corporate world (alias the multinational corporations - MNCs) to completely capture the food chain. A “Small” group of biotech companies will patent the transferred genes as well as the technology associated with them. They will then buy up the competing seed merchants and seed-breeding centres, thereby controlling the production of food at every possible level. Independent farmers, big and small, will be completely wiped out of the food industry. At best, they will be reduced to the status of being sub-contractors.

This line of argument goes on to claim that the control of the food chain will be disastrous for the poor since the MNCs, guided by the profit motive, will only focus on the high-value food items demanded by the affluent. Thus, in the long run, the production of basic staples which constitute the food basket of the poor will taper off.

However, this vastly overestimates the power of the MNCs. Even if the research promoted by them does focus on the high-value food items, much of biotechnology research is also funded by governments in both developing and developed countries. Indeed, the protato is a by-product of this type of research. If the protato passes the field trials, there is no reason to believe that it cannot be marketed in the global potato market. And this type of success story can be repeated with other basic food items.

CAT/2003(RC)

Question. 89

According to the passage, biotechnology research

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The controversy over genetically-modified food continues unabated in the West. Genetic modification (GM) is the science by which the genetic material of a plant is altered, perhaps to make it more resistant to pests or killer weeds or to enhance its nutritional value. Many food biotechnologists claim that GM will be a major contribution of science to mankind in the 21st century. On the other hand, large numbers of opponents, mainly in Europe, claim that the benefits of GM are a myth propagated by multinational corporations to increase their profits, that they pose a health hazard, and have therefore called for governments to ban the sale of genetically-modified food.

The anti-GM campaign has been quite effective in Europe, with several European Union member countries imposing a virtual ban for five years over genetically modified food imports. Since the genetically modified food industry is particularly strong in the United States of America, the controversy also constitutes another chapter in the US-Europe skirmishes which have become particularly acerbic after the US invasion of Iraq.

To a large extent, the GM controversy has been ignored in the Indian media, although Indian biotechnologists have been quite active in GM research. Several groups of Indian biotechnologists have been working on various issues connected with crops grown in India. One concrete achievement which has recently figured in the news is that of a team led by the former vice-chancellor of Jawaharlal Nehru University, Asis Datta – it has successfully added an extra gene to potatoes to enhance the protein content of the tuber by at least 30 percent. Not surprisingly, the new potato has been called the protato. The protato is now in its third year of field trials. It is quite likely that the GM controversy will soon hit the headlines in India since a spokesperson of the Indian Central government has recently announced that the government may use the protato in its midday meal programme for schools as early as next year.

Why should “scientific progress”, with huge potential benefits to the poor and malnourished, be so controversial? The anti-GM lobby contends that pernicious propaganda has vastly exaggerated the benefits of GM and completely evaded the costs which will have to be incurred if the genetically-modified food industry is allowed to grow unchecked. In particular, they allude to different types of costs.

This group contends that the most important potential cost is that the widespread distribution and growth of genetically-modified food will enable the corporate world (alias the multinational corporations - MNCs) to completely capture the food chain. A “Small” group of biotech companies will patent the transferred genes as well as the technology associated with them. They will then buy up the competing seed merchants and seed-breeding centres, thereby controlling the production of food at every possible level. Independent farmers, big and small, will be completely wiped out of the food industry. At best, they will be reduced to the status of being sub-contractors.

This line of argument goes on to claim that the control of the food chain will be disastrous for the poor since the MNCs, guided by the profit motive, will only focus on the high-value food items demanded by the affluent. Thus, in the long run, the production of basic staples which constitute the food basket of the poor will taper off.

However, this vastly overestimates the power of the MNCs. Even if the research promoted by them does focus on the high-value food items, much of biotechnology research is also funded by governments in both developing and developed countries. Indeed, the protato is a by-product of this type of research. If the protato passes the field trials, there is no reason to believe that it cannot be marketed in the global potato market. And this type of success story can be repeated with other basic food items.

CAT/2003(RC)

Question. 90

Genetic modification makes plants more resistant to killer weeds. However, this can lead to environmental damage by

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The controversy over genetically-modified food continues unabated in the West. Genetic modification (GM) is the science by which the genetic material of a plant is altered, perhaps to make it more resistant to pests or killer weeds or to enhance its nutritional value. Many food biotechnologists claim that GM will be a major contribution of science to mankind in the 21st century. On the other hand, large numbers of opponents, mainly in Europe, claim that the benefits of GM are a myth propagated by multinational corporations to increase their profits, that they pose a health hazard, and have therefore called for governments to ban the sale of genetically-modified food.

The anti-GM campaign has been quite effective in Europe, with several European Union member countries imposing a virtual ban for five years over genetically modified food imports. Since the genetically modified food industry is particularly strong in the United States of America, the controversy also constitutes another chapter in the US-Europe skirmishes which have become particularly acerbic after the US invasion of Iraq.

To a large extent, the GM controversy has been ignored in the Indian media, although Indian biotechnologists have been quite active in GM research. Several groups of Indian biotechnologists have been working on various issues connected with crops grown in India. One concrete achievement which has recently figured in the news is that of a team led by the former vice-chancellor of Jawaharlal Nehru University, Asis Datta – it has successfully added an extra gene to potatoes to enhance the protein content of the tuber by at least 30 percent. Not surprisingly, the new potato has been called the protato. The protato is now in its third year of field trials. It is quite likely that the GM controversy will soon hit the headlines in India since a spokesperson of the Indian Central government has recently announced that the government may use the protato in its midday meal programme for schools as early as next year.

Why should “scientific progress”, with huge potential benefits to the poor and malnourished, be so controversial? The anti-GM lobby contends that pernicious propaganda has vastly exaggerated the benefits of GM and completely evaded the costs which will have to be incurred if the genetically-modified food industry is allowed to grow unchecked. In particular, they allude to different types of costs.

This group contends that the most important potential cost is that the widespread distribution and growth of genetically-modified food will enable the corporate world (alias the multinational corporations - MNCs) to completely capture the food chain. A “Small” group of biotech companies will patent the transferred genes as well as the technology associated with them. They will then buy up the competing seed merchants and seed-breeding centres, thereby controlling the production of food at every possible level. Independent farmers, big and small, will be completely wiped out of the food industry. At best, they will be reduced to the status of being sub-contractors.

This line of argument goes on to claim that the control of the food chain will be disastrous for the poor since the MNCs, guided by the profit motive, will only focus on the high-value food items demanded by the affluent. Thus, in the long run, the production of basic staples which constitute the food basket of the poor will taper off.

However, this vastly overestimates the power of the MNCs. Even if the research promoted by them does focus on the high-value food items, much of biotechnology research is also funded by governments in both developing and developed countries. Indeed, the protato is a by-product of this type of research. If the protato passes the field trials, there is no reason to believe that it cannot be marketed in the global potato market. And this type of success story can be repeated with other basic food items.

CAT/2003(RC)

Question. 91

Which of the following about the Indian media’s coverage of scientific research does the passage seem to suggest?

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The controversy over genetically-modified food continues unabated in the West. Genetic modification (GM) is the science by which the genetic material of a plant is altered, perhaps to make it more resistant to pests or killer weeds or to enhance its nutritional value. Many food biotechnologists claim that GM will be a major contribution of science to mankind in the 21st century. On the other hand, large numbers of opponents, mainly in Europe, claim that the benefits of GM are a myth propagated by multinational corporations to increase their profits, that they pose a health hazard, and have therefore called for governments to ban the sale of genetically-modified food.

The anti-GM campaign has been quite effective in Europe, with several European Union member countries imposing a virtual ban for five years over genetically modified food imports. Since the genetically modified food industry is particularly strong in the United States of America, the controversy also constitutes another chapter in the US-Europe skirmishes which have become particularly acerbic after the US invasion of Iraq.

To a large extent, the GM controversy has been ignored in the Indian media, although Indian biotechnologists have been quite active in GM research. Several groups of Indian biotechnologists have been working on various issues connected with crops grown in India. One concrete achievement which has recently figured in the news is that of a team led by the former vice-chancellor of Jawaharlal Nehru University, Asis Datta – it has successfully added an extra gene to potatoes to enhance the protein content of the tuber by at least 30 percent. Not surprisingly, the new potato has been called the protato. The protato is now in its third year of field trials. It is quite likely that the GM controversy will soon hit the headlines in India since a spokesperson of the Indian Central government has recently announced that the government may use the protato in its midday meal programme for schools as early as next year.

Why should “scientific progress”, with huge potential benefits to the poor and malnourished, be so controversial? The anti-GM lobby contends that pernicious propaganda has vastly exaggerated the benefits of GM and completely evaded the costs which will have to be incurred if the genetically-modified food industry is allowed to grow unchecked. In particular, they allude to different types of costs.

This group contends that the most important potential cost is that the widespread distribution and growth of genetically-modified food will enable the corporate world (alias the multinational corporations - MNCs) to completely capture the food chain. A “Small” group of biotech companies will patent the transferred genes as well as the technology associated with them. They will then buy up the competing seed merchants and seed-breeding centres, thereby controlling the production of food at every possible level. Independent farmers, big and small, will be completely wiped out of the food industry. At best, they will be reduced to the status of being sub-contractors.

This line of argument goes on to claim that the control of the food chain will be disastrous for the poor since the MNCs, guided by the profit motive, will only focus on the high-value food items demanded by the affluent. Thus, in the long run, the production of basic staples which constitute the food basket of the poor will taper off.

However, this vastly overestimates the power of the MNCs. Even if the research promoted by them does focus on the high-value food items, much of biotechnology research is also funded by governments in both developing and developed countries. Indeed, the protato is a by-product of this type of research. If the protato passes the field trials, there is no reason to believe that it cannot be marketed in the global potato market. And this type of success story can be repeated with other basic food items.

CAT/2003(RC)

Question. 92

The author doubts the anti-GM lobby’s contention that MNC control of the food chain will be disastrous for the poor because

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The controversy over genetically-modified food continues unabated in the West. Genetic modification (GM) is the science by which the genetic material of a plant is altered, perhaps to make it more resistant to pests or killer weeds or to enhance its nutritional value. Many food biotechnologists claim that GM will be a major contribution of science to mankind in the 21st century. On the other hand, large numbers of opponents, mainly in Europe, claim that the benefits of GM are a myth propagated by multinational corporations to increase their profits, that they pose a health hazard, and have therefore called for governments to ban the sale of genetically-modified food.

The anti-GM campaign has been quite effective in Europe, with several European Union member countries imposing a virtual ban for five years over genetically modified food imports. Since the genetically modified food industry is particularly strong in the United States of America, the controversy also constitutes another chapter in the US-Europe skirmishes which have become particularly acerbic after the US invasion of Iraq.

To a large extent, the GM controversy has been ignored in the Indian media, although Indian biotechnologists have been quite active in GM research. Several groups of Indian biotechnologists have been working on various issues connected with crops grown in India. One concrete achievement which has recently figured in the news is that of a team led by the former vice-chancellor of Jawaharlal Nehru University, Asis Datta – it has successfully added an extra gene to potatoes to enhance the protein content of the tuber by at least 30 percent. Not surprisingly, the new potato has been called the protato. The protato is now in its third year of field trials. It is quite likely that the GM controversy will soon hit the headlines in India since a spokesperson of the Indian Central government has recently announced that the government may use the protato in its midday meal programme for schools as early as next year.

Why should “scientific progress”, with huge potential benefits to the poor and malnourished, be so controversial? The anti-GM lobby contends that pernicious propaganda has vastly exaggerated the benefits of GM and completely evaded the costs which will have to be incurred if the genetically-modified food industry is allowed to grow unchecked. In particular, they allude to different types of costs.

This group contends that the most important potential cost is that the widespread distribution and growth of genetically-modified food will enable the corporate world (alias the multinational corporations - MNCs) to completely capture the food chain. A “Small” group of biotech companies will patent the transferred genes as well as the technology associated with them. They will then buy up the competing seed merchants and seed-breeding centres, thereby controlling the production of food at every possible level. Independent farmers, big and small, will be completely wiped out of the food industry. At best, they will be reduced to the status of being sub-contractors.

This line of argument goes on to claim that the control of the food chain will be disastrous for the poor since the MNCs, guided by the profit motive, will only focus on the high-value food items demanded by the affluent. Thus, in the long run, the production of basic staples which constitute the food basket of the poor will taper off.

However, this vastly overestimates the power of the MNCs. Even if the research promoted by them does focus on the high-value food items, much of biotechnology research is also funded by governments in both developing and developed countries. Indeed, the protato is a by-product of this type of research. If the protato passes the field trials, there is no reason to believe that it cannot be marketed in the global potato market. And this type of success story can be repeated with other basic food items.

CAT/2003(RC)

Question. 93

Using the clues in the passage, which of the following countries would you expect to be at the forefront of the anti-GM campaign?

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Modern science, exclusive of geometry, is a comparatively recent creation and can be said to have originated with Galileo and Newton. Galileo was the first scientist to recognize clearly that the only way to further our understanding of the physical world was to resort to experiment. However obvious Galileo’s contention may appear in the light of our present knowledge, it remains a fact that the Greeks, in spite of their proficiency in geometry, never seem to have realized the importance of experiment. To a certain extent, this may be attributed to the crudeness of their instruments of measurement. Still, an excuse of this sort can scarcely be put forward when the elementary nature of Galileo’s experiments and observations is recalled. Watching a lamp oscillate in the cathedral of Pisa, dropping bodies from the leaning tower of Pisa, rolling balls down inclined planes, noticing the magnifying effect of water in a spherical glass vase: such was the nature of Galileo’s experiments and observations. As can be seen, they might just as well have been performed by the Greeks. At any rate, it was thanks to such experiments that Galileo discovered the fundamental law of dynamics, according to which the acceleration imparted to a body is proportional to the force acting upon it.
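
Restated compactly in modern notation (a standard textbook formulation, not a quotation from the passage), the fundamental law of dynamics credited to Galileo here says that the acceleration a imparted to a body is proportional to the force F acting upon it, with the body's mass m as the constant of proportionality:

\[ a \propto F, \qquad \text{equivalently} \qquad F = m\,a. \]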

The next advance was due to Newton, the greatest scientist of all time if an account is taken of his joint contributions to mathematics and physics. As a physicist, he was of course an ardent adherent of the empirical method, but his greatest title to fame lies in another direction. Prior to Newton, mathematics, chiefly in the form of geometry, had been studied as a fine art without any view to its physical applications other than in very trivial cases. But with Newton, all the resources of mathematics were turned to an advantage in the solution of physical problems. Thenceforth mathematics appeared as an instrument of discovery, the most powerful one known to man, multiplying the power of thought just as in the mechanical domain the lever multiplied our physical action. It is this application of mathematics to the solution of physical problems, this combination of two separate fields of investigation, which constitutes the essential characteristic of the Newtonian method. Thus problems of physics were metamorphosed into problems of mathematics.

But in Newton’s day the mathematical instrument was still in a very backward state of development. In this field again Newton showed the mark of genius by inventing the integral calculus. As a result of this remarkable discovery, problems, which would have baffled Archimedes, were solved with ease. We know that in Newton’s hands this new departure in scientific method led to the discovery of the law of gravitation. But here again, the real significance of Newton’s achievement lay not so much in the exact quantitative formulation of the law of attraction, as in his having established the presence of law and order at least in one important realm of nature, namely in the motion of heavenly bodies. Nature thus exhibited rationality and was not mere blind chaos and uncertainty. To be sure, Newton’s investigations had been concerned with but a small group of natural phenomena, but it appeared unlikely that this mathematical law and order should turn out to be restricted to certain special phenomena, and the feeling was general that all the physical processes of nature would prove to be unfolding themselves according to rigorous mathematical laws.
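
In its standard modern form (again a restatement rather than wording from the passage), the law of gravitation referred to above gives the attractive force between two bodies of masses \(m_1\) and \(m_2\) separated by a distance \(r\) as

\[ F = G\,\frac{m_1 m_2}{r^{2}}, \]

where \(G\) is the gravitational constant; combining this law with the law of dynamics and the newly invented integral calculus is what allowed the motions of the heavenly bodies to be derived mathematically.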

When Einstein, in 1905, published his celebrated paper on the electrodynamics of moving bodies, he argued that the difficulties surrounding the equations of electrodynamics, together with the negative experiments of Michelson and others, would be obviated if we extended the validity of the Newtonian principle of the relativity of Galilean motion, which applied solely to mechanical phenomena, so as to include all manner of phenomena: electrodynamic, optical, etc. When extended in this way the Newtonian principle of relativity became Einstein’s special principle of relativity. Its significance lay in its assertion that absolute Galilean motion or absolute velocity must ever escape all experimental detection. Henceforth absolute velocity should be conceived of as physically meaningless, not only in the particular realm of mechanics, as in Newton’s day, but in the entire realm of physical phenomena. Einstein’s special principle, by adding increased emphasis to this relativity of velocity, making absolute velocity metaphysically meaningless, created a still more profound distinction between velocity and accelerated or rotational motion. This latter type of motion remained as absolute and real as before. It is most important to understand this point and to realize that Einstein’s special principle is merely an extension of the validity of the classical Newtonian principle to all classes of phenomena.
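
A short worked illustration of the classical principle being extended here (standard textbook material rather than anything stated in the passage): under a Galilean change of reference frame moving with constant velocity \(v\), positions, velocities and accelerations transform as

\[ x' = x - vt, \qquad \dot{x}' = \dot{x} - v, \qquad \ddot{x}' = \ddot{x}, \]

so velocities are shifted while accelerations are left unchanged. Because the laws of mechanics involve only accelerations, no mechanical experiment can single out an absolute velocity; Einstein’s special principle asserts the same undetectability for electrodynamic and optical phenomena, while accelerated or rotational motion remains absolute.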

CAT/2003(RC)

Question. 94

Which of the following statements about modern science best captures the theme of the passage?

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

Modern science, exclusive of geometry, is a comparatively recent creation and can be said to have originated with Galileo and Newton. Galileo was the first scientist to recognize clearly that the only way to further our understanding of the physical world was to resort to experiment. However obvious Galileo’s contention may appear in the light of our present knowledge, it remains a fact that the Greeks, in spite of their proficiency in geometry, never seem to have realized the importance of experiment. To a certain extent, this may be attributed to the crudeness of their instruments of measurement. Still, an excuse of this sort can scarcely be put forward when the elementary nature of Galileo’s experiments and observations is recalled. Watching a lamp oscillate in the cathedral of Pisa, dropping bodies from the leaning tower of Pisa, rolling balls down inclined planes, noticing the magnifying effect of water in a spherical glass vase: such was the nature of Galileo’s experiments and observations. As can be seen, they might just as well have been performed by the Greeks. At any rate, it was thanks to such experiments that Galileo discovered the fundamental law of dynamics, according to which the acceleration imparted to a body is proportional to the force acting upon it.

The next advance was due to Newton, the greatest scientist of all time if an account is taken of his joint contributions to mathematics and physics. As a physicist, he was of course an ardent adherent of the empirical method, but his greatest title to fame lies in another direction. Prior to Newton, mathematics, chiefly in the form of geometry, had been studied as a fine art without any view to its physical applications other than in very trivial cases. But with Newton, all the resources of mathematics were turned to an advantage in the solution of physical problems. Thenceforth mathematics appeared as an instrument of discovery, the most powerful one known to man, multiplying the power of thought just as in the mechanical domain the lever multiplied our physical action. It is this application of mathematics to the solution of physical problems, this combination of two separate fields of investigation, which constitutes the essential characteristic of the Newtonian method. Thus problems of physics were metamorphosed into problems of mathematics.

But in Newton’s day the mathematical instrument was still in a very backward state of development. In this field again Newton showed the mark of genius by inventing the integral calculus. As a result of this remarkable discovery, problems, which would have baffled Archimedes, were solved with ease. We know that in Newton’s hands this new departure in scientific method led to the discovery of the law of gravitation. But here again, the real significance of Newton’s achievement lay not so much in the exact quantitative formulation of the law of attraction, as in his having established the presence of law and order at least in one important realm of nature, namely in the motion of heavenly bodies. Nature thus exhibited rationality and was not mere blind chaos and uncertainty. To be sure, Newton’s investigations had been concerned with but a small group of natural phenomena, but it appeared unlikely that this mathematical law and order should turn out to be restricted to certain special phenomena, and the feeling was general that all the physical processes of nature would prove to be unfolding themselves according to rigorous mathematical laws.

When Einstein, in 1905, published his celebrated paper on the electrodynamics, together with the negative experiments of Michelson and others, would be obviated if we extended the validity of the Newtonian principle of the relativity of Galilean motion, Which applied solely to mechanical phenomena, so as to include all manner of phenomena: electrodynamics, optical etc. When extended in this way the Newtonian principle of relativity became Einstein’s special principle of relativity. Its significance lay in its assertion that absolute Galilean motion or absolute velocity must ever escape all experimental detection. Henceforth absolute velocity should be conceived of as physically meaningless, not only in the particular realm of mechanics, as in Newton’s day, but in the entire realm of physical phenomena. Einstein’s special principle, by adding increased emphasis to this relativity of velocity, making absolute velocity metaphysically meaningless, created a still more profound distinction between velocity and accelerated or rotational motion. This latter type of motion remained as absolute and real as before. It is most important to understand this point and to realize that Einstein’s special principle is merely an extension of the validity of the classical Newtonian principle to all classes of phenomena.

CAT/2003(RC)

Question. 95

The significant implication of Einstein’s special principle of relativity is that

CAT/2003(RC)

Question. 96

The statement “Nature thus exhibited rationality and was not mere blind chaos and uncertainty” suggests that

CAT/2003(RC)

Question. 97

Newton may be considered one of the greatest scientists of all time because he

CAT/2003(RC)

Question. 98

According to the author, why did the Greeks NOT conduct experiments to understand the physical world?

Comprehension

Directions for the question: Read the passage carefully and answer the given questions accordingly.

The invention of the gas turbine by Frank Whittle in England and Hans von Ohain in Germany in 1939 signaled the beginning of jet transport. Although the French engineer Lorin had visualized the concept of jet propulsion more than 25 years earlier, it took improved materials and the genius of Whittle and von Ohain to recognize the advantages that a gas turbine offered over a piston engine, including speeds in excess of 350 miles per hour. The progress from the first flight of a liquid-propellant rocket and jet-propelled aircraft in 1939 to the first faster-than-sound (supersonic) manned airplane (the Bell X-1) in 1947 happened in less than a decade. This then led very rapidly to a series of supersonic fighters and bombers, the first of which became operational in the 1950s. The World War II technology foundation and emerging Cold War imperatives then led us into space with the launch of Sputnik in 1957 and the placing of the first man on the moon only 12 years later - a mere 24 years after the end of World War II.

Now, a hypersonic flight can take you anywhere on the planet in less than four hours. The British Royal Air Force and Royal Navy, and the air forces of several other countries, are going to use a single-engine cousin to the F/A-22 called the F-35 Joint Strike Fighter. These planes exhibit stealthy angles and coatings, among aviation’s most cutting-edge advances in design, that make them difficult for radar to detect. The V-22, known as a tilt-rotor, is part helicopter, part airplane: it takes off vertically, then tilts its engines forward for winged flight. It offers greater speed, three times the payload, and five times the range of the helicopters it’s meant to replace. The new fighter, the F/A-22 Raptor, with more than a million parts, shows a perfect amalgamation of stealth, speed, avionics, and agility.
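As a rough check on the four-hour claim: no point on Earth is more than about half the planet's circumference, roughly 20,000 km, from any other, so the required average speed is about

\[ \frac{20{,}000\ \text{km}}{4\ \text{h}} \;=\; 5{,}000\ \text{km/h} \;\approx\; \text{Mach } 4\text{ to }5, \]

which is just at the threshold of the hypersonic regime (conventionally Mach 5 and above). Any sustained hypersonic cruise would therefore cover the distance comfortably within four hours, ignoring take-off, climb, and descent.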

It seems conventional forms, like the Predator and Global Hawk, are passé; the stealthier unmanned aerial vehicles (UAVs) are in. They are shaped like kites, bats, and boomerangs, all but invisible to enemy radar and able to remain over hostile territory without any fear of getting grilled if shot down. Will the UAVs take away pilots’ jobs permanently? Can a computer-operated machine take a smarter and faster decision in a war-like situation? The new free-flight concept will probably supplement the existing air traffic control system with computers on each plane that map the altitude, route, weather, and other planes; and a decade from now, there will be no use for radar anymore.

How much bigger can the airplanes get? In the ’50s they got speed, in the ’80s they became stealthy. Now, they are getting smarter thanks to computer automation. The change is quite huge: from the four-seater to the A380 airplane. It seems we are now trading speed for size as we build a new superjumbo jet, the 555-seater A380, which will fly at almost the same speed as the Boeing 707, introduced half a century ago, but with improved capacity, range, and greater fuel economy. A few years down the line will come the truly larger model, to be known as the 747X. At the beginning of 2005, the A380, the world’s first fully double-decked superjumbo passenger jet, weighing 1.2 million pounds, may carry a load of about 840 passengers.

Barring the early phase, civil aviation has always lagged behind military technologies (of jet engines, lightweight composite material, etc.). There are two fundamental factors behind the decline in commercial aeronautics in comparison to military aeronautics. There is no collective vision of our future such as the one that drove us in the past. There is also a need for a more aggressive pool of airplane design talents to maintain an industry that continues to find a multibillion-dollar a year market for its product.

Can the history of aviation technology tell us something about the future of aeronautics? Have we reached a final state in our evolution to a mature technology in aeronautics? Are the challenges of coming out with ‘better, cheaper, faster’ designs somehow inferior to those that are suited for ‘faster, higher, further’? Safety should improve greatly as a result of the forthcoming improvements in airframes, engines, and avionics. Sixty years from now, aircraft will recover on their own if the pilot loses control. Satellites are the key not only to GPS (global positioning system) navigation but also to in-flight communications, uplinked weather, and even in-flight email. Although there is some debate about what type of engines will power future airplanes - lightweight turbines, turbocharged diesels, or both - there is little debate about how these power plants will be controlled. Pilots of the future can look forward to more and better onboard safety equipment.

CAT/2003(RC)

Question. 99

According to the first paragraph of the passage, which of the following statements is NOT false?

CAT/2003(RC)

Question. 100

What is the fourth paragraph of the passage, starting, “How much bigger ............,” about?

CAT/2003(RC)

Question. 101

What is the most noteworthy difference between V-22 and a standard airplane?

CAT/2003(RC)

Question. 102

Why might radars not be used a decade from now?

CAT/2003(RC)

Question. 103

According to the author, commercial aeronautics, in contrast to military aeronautics, has declined because, among other things,

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

Cells are the ultimate multitaskers: they can switch on genes and carry out their orders, talk to each other, divide in two, and much more, all at the same time. But they couldn’t do any of these tricks without a power source to generate movement. The inside of a cell bustles with more traffic than Delhi roads, and, like all vehicles, the cell’s moving parts need engines. Physicists and biologists have looked “under the hood” of the cell - and laid out the nuts and bolts of molecular engines.

The ability of such engines to convert chemical energy into motion is the envy of nanotechnology researchers looking for ways to power molecule-sized devices. Medical researchers also want to understand how these engines work. Because these molecules are essential for cell division, scientists hope to shut down the rampant growth of cancer cells by deactivating certain motors. Improving motor-driven transport in nerve cells may also be helpful for treating diseases such as Alzheimer’s, Parkinson’s or ALS, also known as Lou Gehrig’s disease.

We wouldn’t make it far in life without motor proteins. Our muscles wouldn’t contract. We couldn’t grow, because the growth process requires cells to duplicate their machinery and pull the copies apart. And our genes would be silent without the services of messenger RNA, which carries genetic instructions over to the cell’s protein-making factories. The movements that make these cellular activities possible occur along a complex network of threadlike fibers, or polymers, along which bundles of molecules travel like trams. The engines that power the cell’s freight are three families of proteins, called myosin, kinesin and dynein. For fuel, these proteins burn molecules of ATP, which cells make when they break down the carbohydrates and fats from the foods we eat. The energy from burning ATP causes changes in the proteins’ shape that allow them to heave themselves along the polymer track. The results are impressive: In one second, these molecules can travel between 50 and 100 times their own diameter. If a car with a 5-foot-wide engine were as efficient, it would travel 170 to 340 kmph.
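To put the car analogy in figures: scaling 50 to 100 body lengths per second to a 5-foot-wide engine gives

\[ 50 \times 5\ \text{ft/s} = 250\ \text{ft/s} \approx 274\ \text{km/h}\ (\approx 170\ \text{mph}), \qquad 100 \times 5\ \text{ft/s} = 500\ \text{ft/s} \approx 549\ \text{km/h}\ (\approx 341\ \text{mph}), \]

so the 170 to 340 range quoted above evidently corresponds to miles per hour.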

Ronald Vale, a researcher at the Howard Hughes Medical Institute and the University of California at San Francisco, and Ronald Milligan of the Scripps Research Institute have realized a long-awaited goal by reconstructing the process by which myosin and kinesin move, almost down to the atom. The dynein motor, on the other hand, is still poorly understood. Myosin molecules, best known for their role in muscle contraction, form chains that lie between filaments of another protein called actin. Each myosin molecule has a tiny head that pokes out from the chain like oars from a canoe. Just as rowers propel their boat by stroking their oars through the water, the myosin molecules stick their heads into the actin and hoist themselves forward along the filament. While myosin moves along in short strokes, its cousin kinesin walks steadily along a different type of filament called a microtubule. Instead of using a projecting head as a lever, kinesin walks on two “legs”. Based on these differences, researchers used to think that myosin and kinesin were virtually unrelated. But newly discovered similarities in the motors’ ATP-processing machinery now suggest that they share a common ancestor molecule. At this point, scientists can only speculate as to what type of primitive cell-like structure this ancestor occupied as it learned to burn ATP and use the energy to change shape. “We’ll never really know, because we can’t dig up the remains of ancient proteins, but that was probably a big evolutionary leap,” says Vale.

On a slightly larger scale, loner cells like sperm or infectious bacteria are prime movers that resolutely push their way through to other cells. As L. Mahadevan and Paul Matsudaira of the Massachusetts Institute of Technology explain, the engines in this case are springs or ratchets that are clusters of molecules, rather than single proteins like myosin and kinesin. Researchers don’t yet fully understand these engines’ fueling process or the details of how they move, but the result is a force to be reckoned with. For example, one such engine is a springlike stalk connecting a single-celled organism called a vorticellid to the leaf fragment it calls home. When exposed to calcium, the spring contracts, yanking the vorticellid down at speeds approaching 3 inches (8 centimeters) per second.

Springs like this are coiled bundles of filaments that expand or contract in response to chemical cues. A wave of positively charged calcium ions, for example, neutralizes the negative charges that keep the filaments extended. Some sperm use springlike engines made of actin filaments to shoot out a barb that penetrates the layers that surround an egg. And certain viruses use a similar apparatus to shoot their DNA into the host’s cell. Ratchets are also useful for moving whole cells, including some other sperm and pathogens. These engines are filaments that simply grow at one end, attracting chemical building blocks from nearby. Because the other end is anchored in place, the growing end pushes against any barrier that gets in its way.

Both springs and ratchets are made up of small units that each move just slightly, but collectively produce a powerful movement. Ultimately, Mahadevan and Matsudaira hope to better understand just how these particles create an effect that seems to be so much more than the sum of its parts. Might such an understanding provide inspiration for ways to power artificial nano-sized devices in the future? “The short answer is absolutely,” says Mahadevan. “Biology has had a lot more time to evolve enormous richness in design for different organisms. Hopefully, studying these structures will not only improve our understanding of the biological world, but it will also enable us to copy them, take apart their components and re-create them for other purposes.”

CAT/2002(RC)

Question. 104

According to the author, research on the power source of movement in cells can contribute to

CAT/2002(RC)

Question. 105

The author has used several analogies to illustrate his arguments in the article. Which of the following pairs of words are examples of the analogies used?

(A) Cell activity and vehicular traffic

(B) Polymers and tram tracks

(C) Genes and canoes

(D) Vorticellids and ratchets

CAT/2002(RC)

Question. 106

Read the five statements below: A, B, C, D, and E. From the options given, select the one which includes a statement that is not representative of an argument presented in the passage.

(A) Sperms use spring-like engines made of the actin filament

(B) Myosin and kinesin are unrelated

(C) Nanotechnology researchers look for ways to power molecule-sized devices

(D) Motor proteins help muscle contraction

(E) The dynein motor is still poorly understood

CAT/2002(RC)

Question. 107

Read the four statements below: A, B, C, and D. From the options given, select the one which includes only statement(s) that are representative of arguments presented in the passage.

(A) Protein motors help growth processes

(B) Improved transport in nerve cells will help arrest tuberculosis and cancer

(C) Cells, together, generate more power than the sum of power generated by them separately

(D) Vorticellid and the leaf fragment are connected by a calcium engine

CAT/2002(RC)

Question. 108

Read the four statements below: A, B, C, and D. From the options given, select the one which includes statement(s) that are representative of arguments presented in the passage.

(A) Myosin, kinesin, and actin are three types of protein

(B) Growth processes involve a routine in a cell that duplicates their machinery and pulls the copies apart

(C) Myosin molecules can generate vibrations in muscles

(D) Ronald and Mahadevan are researchers at the Massachusetts Institute of Technology 

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

In the modern scientific story, light was created not once but twice. The first time was in the Big Bang, when the universe began its existence as a glowing, expanding fireball, which cooled off into darkness after a few million years. The second time was hundreds of millions of years later, when the cold material condensed into dense nuggets under the influence of gravity and ignited to become the first stars.

Sir Martin Rees, Britain’s astronomer royal, named the long interval between these two enlightenments the cosmic “Dark Age”. The name describes not only the poorly lit conditions, but also the ignorance of astronomers about that period. Nobody knows exactly when the first stars formed, or how they organised themselves into galaxies - or even whether stars were the first luminous objects. They may have been preceded by quasars, which are mysterious, bright spots found at the centres of some galaxies.

Now, two independent groups of astronomers, one led by Robert Becker of the University of California, Davis, and the other by George Djorgovski of Caltech, claim to have peered far enough into space with their telescopes (and therefore backward enough in time) to observe the closing days of the Dark Age.

The main problem that plagued previous efforts to study the Dark Age was not the lack of suitable telescopes, but rather the lack of suitable things at which to point them. Because these events took place over 13 billion years ago, if astronomers are to have any hope of unraveling them, they must study objects that are at least 13 billion light-years away. The best prospects are quasars, because they are so bright and compact that they can be seen across vast stretches of space. The energy source that powers a quasar is unknown, although it is suspected to be the intense gravity of a giant black hole. However, at the distances required for the study of the Dark Age, even quasars are extremely rare and faint.

Recently, some members of Dr. Becker’s team announced their discovery of the four most distant quasars known. All the new quasars are terribly faint, a challenge that both teams overcame by peering at them through one of the twin Keck telescopes in Hawaii. These are the world’s largest, and can therefore collect the most light. The new work by Dr. Becker’s team analysed the light from all four quasars. Three of them appeared to be similar to ordinary, less distant quasars. However, the fourth and most distant, unlike any other quasar ever seen, showed unmistakable signs of being shrouded in a fog of hydrogen gas. This gas is leftover material from the Big Bang that did not condense into stars or quasars. It acts like fog because new-born stars and quasars emit mainly ultraviolet light, and hydrogen gas is opaque to ultraviolet. Seeing this fog had been the goal of would-be Dark Age astronomers since 1965, when James Gunn and Bruce Peterson spelled out the technique for using quasars as backlighting beacons to observe the fog’s ultraviolet shadow.

The fog prolonged the period of darkness until the heat from the first stars and quasars had the chance to ionize the hydrogen (breaking it into its constituent parts, protons and electrons). Ionized hydrogen is transparent to ultraviolet radiation, so at that moment the fog lifted and the universe became the well-lit place it is today. For this reason, the end of the Dark Age is called the “Epoch of Reionization”. Because the ultraviolet shadow is visible only in the most distant of the four quasars, Dr. Becker’s team concluded that the fog had dissipated completely by the time the universe was about 900 million years old and one-seventh of its current size.

CAT/2001(RC)

Question. 109

In the passage, the Dark Age refers to

CAT/2001(RC)

Question. 110

Astronomers find it difficult to study the Dark Age because

CAT/2001(RC)

Question. 111

The four most distant quasars discovered recently

CAT/2001(RC)

Question. 112

The fog of hydrogen gas seen through the telescopes

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The current debate on intellectual property rights (IPRs) raises a number of important issues concerning the strategy and politics for building a more dynamic national agricultural research system, the relative roles of public and private sectors, and the role of agribusiness multinational corporations (MNCs). This debate has been stimulated by the international agreement on Trade-Related Intellectual Property (TRIPs), negotiated as part of the Uruguay Round. TRIPs for the first time seeks to bring innovations in agricultural technology under a new worldwide IPR regime. The agribusiness MNCs (along with pharmaceutical companies) played a leading part in lobbying for such a regime during the Uruguay Round negotiations. The argument was that incentives are necessary to stimulate innovations and that this calls for a system of patents that gives innovators the sole right to use (or sell/lease the right to use) their innovations for a specified period and protects them against unauthorized copying or use. With the strong support of their national governments, they were influential in shaping the agreement on TRIPs which eventually emerged from the Uruguay Round.

The current debate on TRIPs in India, as indeed elsewhere, echoes wider concerns about ‘privatization’ of research and allowing a free field for MNCs in the sphere of biotechnology and agriculture. The agribusiness corporations, and those with unbounded faith in the power of science to overcome all likely problems, point to the vast potential that new technology holds for solving the problems of hunger, malnutrition, and poverty in the world. The exploitation of this potential should be encouraged, and this is best done by the private sector, for which patents are essential. Some, who do not necessarily accept this optimism, argue that fears of MNC domination are exaggerated and that farmers will accept their products only if they decisively outperform the available alternatives. Those who argue against agreeing to introduce an IPR regime in agriculture and encouraging private-sector research are apprehensive that this will work to the disadvantage of farmers by making them more dependent on monopolistic MNCs. A different, though related, apprehension is that extensive use of hybrids and genetically engineered new varieties might increase the vulnerability of agriculture to outbreaks of pests and diseases. The larger, longer-term consequences of reduced biodiversity that may follow from the use of specially bred varieties are another cause for concern. Moreover, corporations, driven by the profit motive, will necessarily tend to underplay, if not ignore, potential adverse consequences, especially those which are unknown and which may manifest themselves only over a relatively long period. On the other hand, high-pressure advertising and aggressive sales campaigns by private companies can seduce farmers into accepting varieties without being aware of potential adverse effects and the possibility of disastrous consequences for their livelihood if these varieties happen to fail. There is no provision under the laws, as they now exist, for compensating users against such eventualities.

Excessive preoccupation with seeds and seed material has obscured other important issues involved in reviewing the research policy. We need to remind ourselves that improved varieties by themselves are not sufficient for sustained growth of yields. In our own experience, some of the early high-yielding varieties (HYVs) of rice and wheat were found susceptible to widespread pest attacks, and some had problems of grain quality. Further research was necessary to solve these problems. This largely successful research was almost entirely done in public research institutions. Of course, it could in principle have been done by private companies, but whether they choose to do so depends crucially on the extent of the loss in market for their original introductions on account of the above factors and whether the companies are financially strong enough to absorb the ‘losses’, invest in research to correct the deficiencies and recover the lost market. Public research, which is not driven by profit, is better placed to take corrective action. Research for improving common pool resource management, maintaining ecological health, and ensuring sustainability is both critical and demanding in terms of technical challenge and resource requirements. As such research is crucial to the impact of new varieties, chemicals, and equipment in the farmer’s field, private companies should be interested in such research. But their primary interest is in the sale of seed materials, chemicals, equipment, and other inputs produced by them. Knowledge and techniques --- can only do such work.

The public sector must therefore continue to play a major role in the national research system. It is both wrong and misleading to pose the problem in terms of public sector versus private sector or of privatization of research. We need to address problems likely to arise on account of public-private sector complementarity and ensure that the public research system performs efficiently. Complementarity between various elements of research raises several issues in implementing an IPR regime. Private companies do not produce new varieties and inputs entirely as a result of their own research. Almost all technological improvement is based on knowledge and experience accumulated from the past, and the results of basic and applied research in public and quasi-public institutions (universities, research organizations). Moreover, as is increasingly recognized, the accumulated stock of knowledge does not reside only in the scientific community and its academic publications but is also widely diffused in the traditions and folk knowledge of local communities all over.

The deciphering of the structure and functioning of DNA forms the basis of much of modern biotechnology. But this fundamental breakthrough is a ‘public good’, freely accessible in the public domain and usable free of any charge. Varieties/techniques developed using that knowledge can however be, and are, patented for private profit. Similarly, private corporations draw extensively, and without any charge, on germplasm available in varieties of plant species (neem and turmeric are now famous examples). Publicly funded gene banks, as well as new varieties bred by public sector research stations, can also be used freely by private enterprises for developing their own varieties, for which they then seek patent protection. Should private breeders be allowed free use of basic scientific discoveries? Should they be allowed free use of the repositories of traditional knowledge and germplasm that are maintained and improved by publicly funded institutions? Or should users be made to pay for such use? If they are to pay, what should be the basis of compensation? Should the compensation be for individuals or for the communities/institutions to which they belong? Should individuals/institutions be given the right of patenting their innovations? These are some of the important issues that deserve more attention than they now get and need serious, detailed study to evolve reasonably satisfactory, fair, and workable solutions. Finally, the tendency to equate the public sector with the government is wrong. The public space is much wider than government departments and includes co-operatives, universities, public trusts, and a variety of non-governmental organizations (NGOs). Giving greater autonomy to research organizations from government control, and giving non-government public institutions the space and resources to play a larger, more effective role in research, is therefore an issue of direct relevance in restructuring the public research system.

 

CAT/2000(RC)

Question. 113

Which one of the following statements describes an important issue, or important issues, not being raised in the context of the current debate on IPRs?

CAT/2000(RC)

Question. 114

The fundamental breakthrough in deciphering the structure and functioning of DNA has become a public good. This means that:

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The current debate on intellectual property rights (IPRs) raises a number of important issues concerning the strategy and politics for building a more dynamic national agricultural research system, the relative roles of public and private sectors, and the role of agribusiness multinational corporations(MNCs). This debate has been stimulated by the international agreement on Trade-Related Intellectual Property (TRIPs), negotiated as part of the Uruguay Round. TRIPs for the first time seeks to bring innovations in agricultural technology under a new worldwide IPR regime. The agribusiness MNCs (along with pharmaceutical companies) played a leading part in lobbying for such a regime during the Uruguay Round negotiations. The argument was that incentives are necessary to stimulate innovations and that this calls for a system of patents that gives innovators the sole right to use (or sell/lease the right to use) their innovations for a specified period and protects them against unauthorized copying or use. With the strong support of their national governments, they were influential in shaping the agreement on TRIPs which eventually emerged from the Uruguay Round.

The current debate on TRIPs in India-as indeed elsewhere-echoes wider concerns about ‘privatization’ of research and allowing a free field for MNCs in the sphere of biotechnology and agriculture. The agribusiness corporations, and those with unbounded faith in the power of science to overcome all likely problems, point to the vast potential that new technology holds for solving the problems of hunger, malnutrition, and poverty in the world. The exploitation of this potential should be encouraged and this is best done by the private sector for which patents are essential. Some, who do not necessarily accept this optimism argue, that fears of MNC domination are exaggerated and that farmers will accept their products only if they decisively outperform the available alternatives. Those who argue against agreeing to introduce an IPR regime in agriculture and encouraging private-sector research are apprehensive that this will work to the disadvantage of farmers by making them more dependent on monopolistic MNCs. A different, though related apprehension is that extensive use of hybrids and genetically engineered new varieties might increase the vulnerability of agriculture to outbreaks of pests and diseases. The larger, longer-term consequences of reduced biodiversity that may follow from the use of specially bred varieties are also another cause for concern. Moreover, corporations, driven by the profit motive, will necessarily tend to underplay, if not ignore, potential adverse consequences, especially those which are unknown and which may manifest themselves only over a relatively long period. On the other hand, high-pressure advertising and aggressive sales campaigns by private companies can seduce farmers into accepting varieties without being aware of potential adverse effects and the possibility of disastrous consequences for their livelihood if these varieties happen to fail. There is no provision under the laws, as they now exist, for compensating users against such eventualities.

Excessive preoccupation with seeds and seed material has obscured other important issues involved in reviewing the research policy. We need to remind ourselves that improved varieties by themselves are not sufficient for sustained growth of yields. In our own experience, some of the early high-yielding varieties (HYVs) of rice and wheat were found susceptible to widespread post attacks; and some had problems of grain quality. Further research was necessary to solve these problems. This largely successful research was almost entirely done in public research institutions. Of course, it could in principle have been done by private companies, but whether they choose to do so depends crucially on the extent of the loss in market for their original introductions on account of the above factors and whether the companies are financially strong enough to absorb the ‘losses’, invest in research to correct the deficiencies and recover the lost market. Public research, which is not driven by profit, is better placed to take corrective action. Research for improving common poll resource management, maintaining ecological health, and ensuring sustainability is both critical and also demanding in terms of technical challenge and resource requirements. As such research is crucial to the impact of new varieties, chemicals, and equipment in the farmer’s field, private companies should be interested in such research. But their primary interest is in the sale of seed materials, chemicals, equipment, and other inputs produced by them. Knowledge and techniques --- can only do such work.

The public sector must therefore continue to play a major role in the national research system. It is both wrong and misleading to pose the problem in terms of public sector versus private sector or of privatization of research. We need to address problems likely to arise on account of the public-private sector complementarily and ensure that the public research system performs efficiently. Complementarily between various elements of research raises several issues in implementing an IPR regime. Private companies do not produce new varieties and inputs entirely as a result of their own research. Almost all technological improvement is based on knowledge and experience accumulated from the past, and the results of basic and applied research in public and quasi-public institutions (universities, research organizations). Moreover, as is increasingly recognized, accumulated stock of knowledge does not reside only in the scientific community and its academic publications but is also widely diffused in traditions and folk knowledge of local communities all over.

The deciphering of the structure and functioning of DNA forms the basis of much of modern biotechnology. But this fundamental breakthrough is ‘public good’, freely accessible in the public domain and usable free of any charge. Varieties/techniques developed using that knowledge can however be, and are, patented for private profit. Similarly, private corporations draw extensively, and without any charge, on germplasm available in varieties of plant species (neem and turmeric are now famous examples). Publicly funded gene banks as well as new varieties bred by public sector research stations can also be used freely by private enterprises for developing their own varieties and seek patent protecting them. Should private breeders be allowed free use of basic scientific discoveries? Should the repositories of traditional knowledge and germplasm be collected which are maintained and improved by publicly funded institutions? Or should users be made to pay for such use? If they are to pay, what should be the basis of compensation? Should the compensations be for individuals or for communities/institutions to which they belong? Should individuals/institutions be given the right of patenting their innovations? These are some of the important issues that deserve more attention than they now get and need a serious detailed study to evolve reasonably satisfactory, fair, and workable solutions. Finally, the tendency to equate the public sector with the government is wrong. The public space is much wider than government departments and includes co-operatives, universities, public trusts, and a variety of non-governmental organizations (NGOs). Giving greater autonomy to research organizations from government control and giving non-government public institutions space and resources to play a larger, more effective role in research, is therefore an issue of direct relevance in restructuring the public research system.

CAT/2000(RC)

Question. 115

In debating the respective roles of the public and private sectors in the national research system, it is important to recognize:


CAT/2000(RC)

Question. 116

Which one of the following may provide incentives to address the problem of potential adverse consequences of biotechnology?


CAT/2000(RC)

Question. 117

Which of the following sentences is not a likely consequence of emerging technologies in agriculture?


CAT/2000(RC)

Question. 118

The TRIPs agreement emerged from the Uruguay Round to


CAT/2000(RC)

Question. 119

Public or quasi-public research institutions are more likely than private companies to address the negative consequences of new technologies, because of which of the following reasons?


CAT/2000(RC)

Question. 120

While developing a strategy and policies for building a more dynamic national agricultural research system, which one of the following statements needs to be considered?

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

In a modern computer, electronic and magnetic storage technologies play complementary roles. Electronic memory chips are fast but volatile (their contents are lost when the computer is unplugged). Magnetic tapes and hard disks are slower but have the advantage that they are non-volatile so that they can be used to store software and documents even when the power is off.

In laboratories around the world, however, researchers are hoping to achieve the best of both worlds. They are trying to build magnetic memory chips that could be used in place of today’s electronic ones.

These magnetic memories would be non-volatile; but they would also be faster, would consume less power, and would be able to stand up to hazardous environments more easily. Such chips would have obvious applications in storage cards for digital cameras and music-players; they would enable handheld and laptop computers to boot up more quickly and to operate for longer; they would allow desktop computers to run faster; they would doubtless have military and space-faring advantages too. But although the theory behind them looks solid, there are tricky practical problems that need to be overcome.

Two different approaches, based on different magnetic phenomena, are being pursued. The first, being investigated by Gary Prinz and his colleagues at the Naval Research Laboratory (NRL) in Washington, D.C., exploits the fact that the electrical resistance of some materials changes in the presence of a magnetic field, a phenomenon known as magneto-resistance. For some multi-layered materials this effect is particularly powerful and is, accordingly, called “giant” magneto-resistance (GMR). Since 1997, the exploitation of GMR has made cheap multi-gigabyte hard disks commonplace. The magnetic orientations of the magnetized spots on the surface of a spinning disk are detected by measuring the changes they induce in the resistance of a tiny sensor. This technique is so sensitive that it means the spots can be made smaller and packed closer together than was previously possible, thus increasing the capacity and reducing the size and cost of a disk drive.

Dr. Prinz and his colleagues are now exploiting the same phenomenon on the surface of memory chips rather than spinning disks. In a conventional memory chip, each binary digit (bit) of data is represented using a capacitor reservoir of electrical charge that is either empty or full, to represent a zero or a one. In the NRL’s magnetic design, by contrast, each bit is stored in a magnetic element in the form of a vertical pillar of magnetizable material. A matrix of wires passing above and below the elements allows each to be magnetized, either clockwise or anticlockwise, to represent a zero or a one. Another set of wires allows current to pass through any particular element. By measuring an element’s resistance you can determine its magnetic orientation, and hence whether it is storing a zero or a one. Since the elements retain their magnetic orientation even when the power is off, the result is non-volatile memory. Unlike the elements of electronic memory, a magnetic memory’s elements are not easily disrupted by radiation. And compared with electronic memories, whose capacitors need constant topping up, magnetic memories are simpler and consume less power. The NRL researchers plan to commercialize their device through a company called Non-Volatile Electronics, which recently began work on the necessary processing and fabrication techniques. But it will be some years before the first chips roll off the production line.

Most attention in the field is focused on an alternative approach based on magnetic tunnel junctions (MTJs), which are being investigated by researchers at chip makers such as IBM, Motorola, Siemens, and Hewlett-Packard. IBM’s research team, led by Stuart Parkin, has already created a 500-element working prototype that operates at 20 times the speed of conventional memory chips and consumes 1% of the power. Each element consists of a sandwich of two layers of magnetizable material separated by a barrier of aluminum oxide just four or five atoms thick. The polarisation of the lower magnetizable layer is fixed in one direction, but that of the upper layer can be set (again, by passing a current through a matrix of control wires) either to the left or to the right, to store a zero or a one. The polarisations of the two layers are then in either the same or opposite directions.

Although the aluminum-oxide barrier is an electrical insulator, it is so thin that electrons are able to jump across it via a quantum-mechanical effect called tunneling. It turns out that such tunneling is easier when the two magnetic layers are polarised in the same direction than when they are polarised in opposite directions. So, by measuring the current that flows through the sandwich, it is possible to determine the alignment of the topmost layer, and hence whether it is storing a zero or a one.

To build a full-scale memory chip based on MTJs is, however, no easy matter. According to Paulo Freitas, an expert on chip manufacturing at the Technical University of Lisbon, magnetic memory elements will have to become far smaller and more reliable than current prototypes if they are to compete with electronic memory. At the same time, they will have to be sensitive enough to respond when a neighboring element is changed. Despite these difficulties, the general consensus is that MTJs are the more promising idea. Dr. Parkin says his group evaluated the GMR approach and decided not to pursue it, despite the fact that IBM pioneered GMR in hard disks. Dr. Prinz, however, contends that his plans will eventually offer higher storage densities and lower production costs.

Not content with shaking up the multi-billion-dollar market for computer memory, some researchers have even more ambitious plans for magnetic computing. In a paper published last month in Science, Russell Cowburn and Mark Welland of Cambridge University outlined research that could form the basis of a magnetic microprocessor: a chip capable of manipulating (rather than merely storing) information magnetically. In place of conducting wires, a magnetic processor would have rows of magnetic dots, each of which could be polarised in one of two directions. Individual bits of information would travel down the rows as magnetic pulses, changing the orientation of the dots as they went. Dr. Cowburn and Dr. Welland have demonstrated how a logic gate (the basis of a microprocessor) could work in such a scheme. In their experiment, they fed a signal in at one end of the chain of dots and used a second signal to control whether it propagated along the chain.

It is, admittedly, a long way from a single logic gate to a full microprocessor, but this was also true when the transistor was first invented. Dr. Cowburn, who is now searching for backers to help commercialize the technology, says he believes it will be at least ten years before the first magnetic microprocessor is constructed. But other researchers in the field agree that such a chip is the next logical step. Dr. Prinz says that once magnetic memory is sorted out, “the target is to go after the logic circuits.” Whether all-magnetic computers will ever be able to compete with other contenders that are jostling to knock electronics off its perch, such as optical, biological, and quantum computing, remains to be seen. Dr. Cowburn suggests that the future lies with hybrid machines that use different technologies. But computing with magnetism evidently has an attraction all its own.

CAT/2000(RC)

Question. 121

In developing magnetic memory chips to replace the electronic ones, two alternative research paths are being pursued. These are approaches based on


CAT/2000(RC)

Question. 122

A binary digit or bit is represented in the magneto-resistance based magnetic chip using:

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

In a modern computer, electronic and magnetic storage technologies play complementary roles. Electronic memory chips are fast but volatile (their contents are lost when the computer is unplugged). Magnetic tapes and hard disks are slower but have the advantage that they are non-volatile so that they can be used to store software and documents even when the power is off.

In laboratories around the world, however, researchers are hoping to achieve the best of both worlds. They are trying to build magnetic memory chips that could be used in place of today’s electronic ones.

These magnetic memories would be non-volatile; but they would also be faster, would consume less power, and would be able to stand up to hazardous environments more easily. Such chips would have obvious applications in storage cards for digital cameras and music-players; they would enable handheld and laptop computers to boot up more quickly and to operate for longer; they would allow desktop computers to run faster; they would doubtless have military and space-faring advantages too. But although the theory behind them looks solid, there are tricky practical problems that need to be overcome.

Two different approaches, based on different magnetic phenomena, are being pursued. The first, being investigated by Gary Prinz and his colleagues at the Naval Research Laboratory (NRL) in Washington, D.C., exploits the fact that the electrical resistance of some materials changes in the presence of a magnetic field-a phenomenon known as magneto-resistance. For some multi-layered materials this effect is particularly powerful and is, accordingly, called “giant” magneto-resistance (GMR). Since 1997, the exploitation of GMR has made cheap multi-gigabyte hard disks commonplace. The magnetic orientations of the magnetized spots on the surface of a spinning disk are detected by measuring the changes they induce in the resistance of a tiny sensor. This technique is so sensitive that it means the sports can be made smaller and packed closer together than was previously possible, thus increasing the capacity and reducing the size and cost of a disk drive.

Dr. Prinz and his colleagues are now exploiting the same phenomenon on the surface of memory chips rather than spinning disks. In a conventional memory chip, each binary digit (bit) of data is represented using a capacitor reservoir of electrical charge that is either empty of full ---to represent a zero or a one. In an NRL’s magnetic design, by contrast, each bit is stored in a magnetic element in the form of a vertical pillar of magnetizable material. A matrix of wires passing above and below the elements allows each to be magnetized, either clockwise, to represent zero or one. Another set of wires allows current to pass through any particular element. By measuring an element’s resistance you can determine its magnetic orientation, and hence whether it is storing a zero or a one. Since the elements retain their magnetic orientation even when the power is off, the result is Non-Volatile memory. Unlike the elements of electronic memory, a magnetic memory’s elements are not easily disrupted by radiation. And compared with electronic memories, whose capacitors need constant topping up, magnetic memories are simpler and consume less power. The NRL researchers plan to commercialize their device through a company called Non-Volatile Electronics, which recently began work on the necessary processing and fabrication techniques. But it will be some years before the first chips roll the production line.

Most attention in the field is focused on an alternative approach based on the magnetic tunnel - junctions (MTJs), which are being investigated by researchers at chip markets such an IBM, Motorola, Siemens, and Hewlett- Packard. IBM’s research team, led by Stuart Parkin, has already created a 500-element working prototype that operates at 20 times the speed of conventional memory chips and consumes 1% of the power. Each element consists of a sandwich of two layers of magnetizable material separated by a barrier of aluminum oxide just four or five atoms thick. The polarisation of a lower magnetizable layer is fixed in one direction, but that of the upper layer can be set (again, bypassing a current through a matrix of control wires ) either to the left or to the right, to store a zero or a one. The polarisations of the two layers are then in either the same or opposite directions.

Although the aluminum-oxide barrier is an electrical insulator, it is so thin that electrons are able to jump across it via a quantum-mechanical effect called tunneling. It turns out that such tunneling is easier when the two magnetic layers is polarised in the same direction than when they are polarised in opposite directions. So, by measuring the current that flows through the sandwich, it is possible to determine the alignment of the topmost layer, and hence whether it is storing a zero or a one.

To build a full-scale memory chip based on MTJ’s is however no easy matter. According to Paulo Freitas, an expert on-chip manufacturing at the Technical University of Lisbon, magnetic memory elements will have to become far smaller and more reliable than current prototypes if they are to compete with electronic memory. At the same time, they will have to be sensitive enough to respond when a neighboring element is changed. Despite these difficulties, the general consensus is that MTJ’s are the more promising ideas. Dr Parkin says his group evaluated the GMR approach and decided not to pursue it, despite the fact that IBM pioneered GMR in hard disks. . Dr. Prinz, however, contends that his plans will eventually offer higher storage densities and lower production costs.

Not content with shaking up the multi-dollar market for computer memory, some researchers have even more ambitious plans for magnetic computing. In a paper published last month in Science, Russell Cowburn and Mark Welland of Cambridge University outlined research that could form the basis of a magnetic microprocessor - a chip capable of manipulating (rather than merely storing) information magnetically. In place of conducting wires, a magnetic processor would have rows of magnetic dots, each of which could be polarised in one of the two directions. Individual bits of information would travel down the rows as magnetic pulses, changing the orientation of the dots as they went. Dr. Cowburn and Dr. Welland have demonstrated how a logic gate (the basis of a microprocessor) could work in such a scheme. In their experiment, they fed a signal in at one end of the chain of dots and used a second signal to control whether it propagated along the chain.

It is admittedly, a long way from a single logic gate to a full microprocessor, but this was true also when the transistor was first invented. Dr. Cowburn is now searching for backers to help commercialize the technology, says he believes it will be at least ten years before the first magnetic microprocessor is constructed. But other researchers in the field agree that such a chip is the next logical step Dr. Prinz says that once magnetic memory is stored out “the target is to go after the logic circuits.” Whether all magnetic computers will ever be able to compete with other contenders that are jostling to knock electronics off its perch-such as optical, biological, and quantum computing-remains to be seen. Dr. Cowbrun suggests that the future lies with hybrid machines that use different technologies. But computing with magnetism evidently has an attraction all its own.

CAT/2000(RC)

Question. 123

In magnetic tunnel junctions (MTJs), tunnelling is easier when:
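
The passage describes an MTJ element as a fixed lower magnetic layer and a settable upper layer separated by a thin aluminium-oxide barrier, with a larger tunnelling current when the two layers are polarised in the same direction. The Python sketch below illustrates that read-out logic only; the current values and the left/right-to-bit mapping are illustrative assumptions, not figures from the passage.

# Toy model of an MTJ element: parallel layers tunnel more easily, so the
# measured current reveals the orientation of the free (upper) layer.
I_PARALLEL = 1.0       # assumed current when the layers are aligned
I_ANTIPARALLEL = 0.4   # assumed, smaller current when they oppose

class MTJElement:
    FIXED_LOWER = "right"             # pinned reference layer

    def __init__(self):
        self.upper = "right"          # free layer; "right" stores 0 in this sketch

    def write(self, bit: int) -> None:
        # A current through the control-wire matrix sets the free layer.
        self.upper = "left" if bit else "right"

    def read(self) -> int:
        # Measure the tunnelling current across the oxide barrier.
        current = I_PARALLEL if self.upper == self.FIXED_LOWER else I_ANTIPARALLEL
        return 0 if current == I_PARALLEL else 1

mtj = MTJElement()
mtj.write(1)
assert mtj.read() == 1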

CAT/2000(RC)

Question. 124

A major barrier on the way to building a full-scale memory chip based on MTJs is:

CAT/2000(RC)

Question. 125

In the MTJ approach, it is possible to identify whether the topmost layer of the magnetized memory element is storing a zero or a one by:

CAT/2000(RC)

Question. 126

A line of research that is trying to build a magnetic chip that can both store and manipulate information is being pursued by:

CAT/2000(RC)

Question. 127

Experimental research currently underway, using rows of magnetic dots, each of which could be polarised in one of two directions, has led to the demonstration of:
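
The Cambridge demonstration described in the passage fed a signal into one end of a chain of magnetic dots and used a second signal to control whether it propagated. The Python sketch below is only a toy rendering of that gating idea; the AND-like behaviour, the function name, and the list representation of the dots are assumptions for illustration, not the actual experiment.

# Toy rendering of the dot-chain logic gate: a pulse flips the dots in turn,
# but only when the control signal allows it to propagate.
def propagate(dots: list, input_pulse: int, control: int) -> list:
    if not (input_pulse and control):
        return dots                    # pulse blocked: chain unchanged
    return [1 - d for d in dots]       # pulse travels down the row, flipping dots

chain = [0, 0, 0, 0]
print(propagate(chain, input_pulse=1, control=1))   # [1, 1, 1, 1]: pulse propagates
print(propagate(chain, input_pulse=1, control=0))   # [0, 0, 0, 0]: pulse blocked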

CAT/2000(RC)

Question. 128

From the passage, which of the following cannot be inferred?

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

From ancient times, men have believed that, under certain peculiar circumstances, life could arise spontaneously: from the ooze of rivers could come eels and from the entrails of dead bulls, bees; worms from mud and maggots from dead meat. This belief was held by Aristotle, Newton, and Descartes, among many others, and apparently by the great William Harvey too. The weight of centuries gradually disintegrated men’s belief in the spontaneous origin of maggots and mice, but the doctrine of spontaneous generation clung tenaciously to the question of bacterial origin.

In association with Buffon, the Irish Jesuit priest John Needham declared that he could bring about at will the creation of living microbes in heat-sterilized broths, and presumably, in propitiation, theorized that God did not create living things directly but bade the earth and water to bring them forth. In his Dictionnaire Philosophique, Voltaire reflected that it was odd to read of Father Needham’s claim while atheists conversely ‘should deny a Creator yet attribute to themselves the power of creating eels’. But wrote Thomas Huxley, ‘The great tragedy of science - the slaying of a beautiful hypothesis by an ugly fact - which is so constantly being enacted under the eyes of philosophers, was played almost immediately, for the benefit of Buffon and Needham.’

The Italian Abbe Spallanzani did an experiment. He showed that a broth sealed from the air while boiling never develops bacterial growths and hence never decomposes. To Needham’s objection that Spallanzani had ruined his broths and the air above them by excessive boiling, the Abbe replied by breaking the seals of his flasks. Air rushed in, and bacterial growth began, but the essential conflict remained. Whatever Spallanzani and his followers did to remove seeds and contaminants was regarded by the spontaneous generationists as damaging to the ‘vital force’ from whence comes new life.

Thus, doubt remained, and into the controversy came the titanic figure of Louis Pasteur. Believing that a solution to this problem was essential to the development of his theories concerning the role of bacteria in nature, Pasteur freely acknowledged the possibility that living bacteria very well might be arising anew from inanimate matter. To him, the research problem was largely a technical one: to repeat the work of those who claimed to have observed spontaneous generation, but to employ infinite care to discover and exclude every possible concealed portal of bacterial entry. For one who contended that life did not enter from the outside, the proof had to go to the question of possible contamination. Pasteur worked logically. After prolonged boiling, a broth would ferment only when air was admitted to it. Therefore, either air contained a factor necessary for the spontaneous generation of life, or viable germs were borne in by the air and seeded in the sterile nutrient broth. Pasteur designed ingenious flasks whose long S-shaped necks could be left open. Air was trapped in the sinuous glass tube. Broths boiled in these flasks remained sterile. When their necks were snapped to admit ordinary air, bacterial growth would then commence - but not in every case. An occasional flask would remain sterile, presumably because the bacterial population of the air is unevenly distributed. The forces of spontaneous generation would not be so erratic. Continued skepticism drove Pasteur to almost fanatical efforts to control the ingredients of his experiments and destroy the doubts of the most skeptical. He ranged from the mountain air of Montanvert, which he showed to be almost sterile, to those deep, clear wells whose waters had been rendered germ-free by slow filtration through sandy soil. The latter discovery led to the familiar porcelain filters of the bacteriology laboratory. With pores small enough to exclude bacteria, solutions allowed to percolate through them could be reliably sterilized.

The argument raged on and soon spilled beyond the boundaries of science to become a burning religious and philosophical question of the day. For many, Pasteur’s conclusions caused conflict because they seemed simultaneously to support the Biblical account of creation while denying a variety of other philosophical systems.

The public was soon caught up in the crossfire of a vigorous series of public lectures and demonstrations by leading exponents of both views, novelists, clergymen, their adjuncts, and friends. Perhaps the most famous of these evenings in the theatre - rivalled only by the great debate between Huxley and Bishop Wilberforce for the elegance of its rhetoric - was Pasteur’s public lecture at the Sorbonne on 7 April 1864. Having shown his audience the swan-necked flasks containing sterile broths, he concluded, “And, therefore, gentlemen, I could point to that liquid and say to you, I have taken my drop of water from the immensity of creation, and I have taken it full of the elements appropriated to the development of inferior beings. And I wait, I watch, I question it - begging it to recommence for me the beautiful spectacle of the first creation. But it is dumb, dumb since these experiments were begun several years ago; it is dumb because I have kept it from the only thing man does not know how to produce: from the germs that float in the air, from Life, for Life is a germ and a germ is Life. Never will the doctrine of spontaneous generation recover from the mortal blow of this simple experiment.” And it has not. Today these same flasks stand immutable: they are still free of microbial life.

It is an interesting fact that, despite the ringing declaration of Pasteur, the issue did not die completely. And although far from healthy, it is not yet dead. In his fascinating biography of Pasteur, Rene Dubos has traced the later developments, which saw new technical progress and criticism, and new energetic figures in the breach of the battle, such as Bastian, for, and the immortal Tyndall, against, the doctrine of spontaneous generation. There was also new ‘sorrow’ for Pasteur as he read years later, in 1877, the last jottings of the great physiologist Claude Bernard and saw in them the ‘mystical’ suggestion that yeast may arise from grape juice. Even at this late date, Pasteur was stirred to new experiments again to prove to the dead Bernard and his followers the correctness of his position.

It seems to me that spontaneous generation is not only a possibility, but a completely reasonable possibility that should never be relinquished from scientific thought. Before men knew of bacteria, they accepted the doctrine of spontaneous generation as the ‘only reasonable alternative’ to a belief in supernatural creation. But today, as we look with satisfaction at the downfall of the spontaneous generation hypothesis, we must not forget that science has rationally concluded that life once did originate on earth by spontaneous generation. It was really Pasteur’s evidence against spontaneous generation that for the first time brought the whole difficult question of the origin of life before the scientific world. In the above controversy, what was unreasonable was the parade of men who claimed to have ‘proved’ or who resolutely ‘believed in’ spontaneous generation in the face of proof, not that spontaneous generation cannot occur, but that their work was shot through with experimental error. The acceptable evidence also makes it clear that spontaneous generation, if it does occur, must obviously be a highly improbable event under present conditions. Logic tells us that science can only prove an event improbable: it can never prove it impossible - and Gamow has appropriately remarked that nobody is really certain what would happen if a hermetically sealed can were opened after a couple of million years. Modern science agrees that it was highly improbable for life to have arisen in the pre-Cambrian seas, but it concluded, nevertheless, that there it did occur. With this, I think, Pasteur would agree.

Aside from their theoretical implications, these researches had the great practical result of putting bacteriology on a solid footing. It was now clear precisely how careful one had to be to avoid bacterial contamination in the laboratory. We now knew what ‘sterile’ meant, and we knew that there could be no such thing as ‘partial sterilization’. The discovery of bacteria high in the upper atmosphere, in the mud of the deep-sea bottom, in the waters of hot springs, and in the Arctic glaciers established bacterial ubiquity as almost absolute. It was the revolution in technique alone that made possible modern bacteriology and the subsequent research connecting bacteria to phenomena of human concern, research which today is more prodigious than ever. We are just beginning to understand the relationship of bacteria to certain human diseases, soil chemistry, nutrition, and the phenomenon of antibiosis, wherein a product of one organism (e.g. penicillin) is detrimental to another.

It is not an exaggeration, then, to say that the emergence of the cell theory represents biology’s most significant and fruitful advance. The realization that all plants and animals are composed of cells that are essentially alike, that cells are all formed by the same fundamental division process, and that the total organism is a whole made up of the activities and inter-relations of its individual cells, opened up horizons we have not even begun to approach. The cell is a microcosm of life, for in its origin, nature and continuity resides the entire problem of biology.

CAT/1998(RC)

Question. 129

Needham’s theory that ‘God did not create living things directly’ was posited as

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

From ancient times, men have believed that, under certain peculiar circumstances, life could arise spontaneously: from the ooze of rivers could come eels and from the entrails of dead bulls, bees; worms from mud and maggots from dead meat. This belief was held by Aristotle, Newton, and Descartes, among many others, and apparently by the great William Harvey too. The weight of centuries gradually disintegrated men’s belief in the spontaneous origin of maggots and mice, but the doctrine of spontaneous generation clung tenaciously to the question of bacterial origin.

In association with Buffon, the Irish Jesuit priest John Needham declared that he could bring about at will the creation of living microbes in heat-sterilized broths, and presumably, in propitiation, theorized that God did not create living things directly but bade the earth and water to bring them forth. In his Dictionnaire Philosophique, Voltaire reflected that it was odd to read of Father Needham’s claim while atheists conversely ‘should deny a Creator yet attribute to themselves the power of creating eels’. But wrote Thomas Huxley, ‘The great tragedy of science - the slaying of a beautiful hypothesis by an ugly fact - which is so constantly being enacted under the eyes of philosophers, was played almost immediately, for the benefit of Buffon and Needham.’

The Italian Abbe Spallanzani did an experiment. He showed that a broth sealed from the air while boiling never develops bacterial growths and hence never decomposes. To Needham’s objection that Spallanzani had ruined his broths and the air above them by excessive boiling, the Abbe replied by breaking the seals of his flasks. Air rushed in and bacterial growth began, but the essential conflict remained. Whatever Spallanzani and his followers did to remove seeds and contaminants was regarded by the spontaneous generationists as damaging to the ‘vital force’ from whence comes new life.

Thus, doubt remained, and into the controversy came the titanic figure of Louis Pasteur. Believing that a solution to this problem was essential to the development of his theories concerning the role of bacteria in nature, Pasteur freely acknowledged the possibility that living bacteria very well might be arising anew from inanimate matter. To him, the research problem was largely a technical one: to repeat the work of those who claimed to have observed spontaneous generation, but to employ infinite care to discover and exclude every possible concealed portal of bacterial entry. For one who contended that life did not enter from the outside, the proof had to turn on the question of possible contamination. Pasteur worked logically. After prolonged boiling, a broth would ferment only when air was admitted to it. Therefore, either air contained a factor necessary for the spontaneous generation of life, or viable germs were borne in by the air and seeded in the sterile nutrient broth. Pasteur designed ingenious flasks whose long S-shaped necks could be left open. Air was trapped in the sinuous glass tube. Broths boiled in these flasks remained sterile. When their necks were snapped to admit ordinary air, bacterial growth would then commence - but not in every case. An occasional flask would remain sterile, presumably because the bacterial population of the air is unevenly distributed; the forces of spontaneous generation would not be so erratic. Continuous skepticism drove Pasteur almost to fanatical efforts to control the ingredients of his experiments and to destroy the doubts of the most skeptical. He ranged from the mountain air of Montanvert, which he showed to be almost sterile, to those deep, clear wells whose waters had been rendered germ-free by slow filtration through sandy soil. The latter discovery led to the familiar porcelain filters of the bacteriology laboratory. With pores small enough to exclude bacteria, solutions allowed to percolate through them could be reliably sterilized.

The argument raged on and soon spilled beyond the boundaries of science to become a burning religious and philosophical question of the day. For many, Pasteur’s conclusions caused conflict because they seemed simultaneously to support the Biblical account of creation while denying a variety of other philosophical systems.

The public was soon caught up in the crossfire of a vigorous series of public lectures and demonstrations by leading exponents of both views, novelists, clergymen, their adjuncts, and friends. Perhaps the most famous of these evenings in the theatre - rivalling the great debate between Huxley and Bishop Wilberforce for the elegance of its rhetoric - was Pasteur’s public lecture at the Sorbonne on 7 April 1864. Having shown his audience the swan-necked flasks containing sterile broths, he concluded, “And, therefore, gentlemen, I could point to that liquid and say to you, I have taken my drop of water from the immensity of creation, and I have taken it full of the elements appropriated to the development of inferior beings. And I wait, I watch, I question it - begging it to recommence for me the beautiful spectacle of the first creation. But it is dumb, dumb since these experiments were begun several years ago; it is dumb because I have kept it from the only thing man does not know how to produce: from the germs that float in the air, from Life, for Life is a germ and a germ is Life. Never will the doctrine of spontaneous generation recover from the mortal blow of this simple experiment.” And it has not. Today these same flasks stand immutable: they are still free of microbial life.

It is an interesting fact that, despite the ringing declaration of Pasteur, the issue did not die completely. And although far from healthy, it is not yet dead. In his fascinating biography of Pasteur, Rene Dubos has traced the later developments, which saw new technical progress and criticism, and new energetic figures in the breach of the battle, such as Bastian, for, and the immortal Tyndall, against, the doctrine of spontaneous generation. There was also new ‘sorrow’ for Pasteur as he read, years later, in 1877, the last jottings of the great physiologist Claude Bernard and saw in them the ‘mystical’ suggestion that yeast may arise from grape juice. Even at this late date, Pasteur was stirred to new experiments, again to prove to the dead Bernard and his followers the correctness of his position.

It seems to me that spontaneous generation is not only a possibility, but a completely reasonable possibility which should never be relinquished from scientific thought. Before men knew of bacteria, they accepted the doctrine of spontaneous generation as the ‘only reasonable alternative’ to a belief in supernatural creation. But today, as we look with satisfaction at the downfall of the spontaneous generation hypothesis, we must not forget that science has rationally concluded that life once did originate on earth by spontaneous generation. It was really Pasteur’s evidence against spontaneous generation that for the first time brought the whole difficult question of the origin of life before the scientific world. In the above controversy, what was unreasonable was the parade of men who claimed to have ‘proved’ or who resolutely ‘believed in’ spontaneous generation in the face of proof, not that spontaneous generation cannot occur, but that their work was shot through with experimental error. The acceptable evidence also makes it clear that spontaneous generation, if it does not occur, must obviously be a highly improbable event under present conditions. Logic tells us that science can only prove an event improbable: it can never prove it impossible - and Gamow has appropriately remarked that nobody is really certain what would happen if a hermetically sealed can were opened after a couple of million years. Modern science agrees that it was highly improbable for life to have arisen in the pre-Cambrian seas, but it concluded, nevertheless, that there it did occur. With this, I think, Pasteur would agree.

Aside from their theoretical implications, these researches had the great practical result of putting bacteriology on a solid footing. It was now clear how careful one had to be to avoid bacterial contamination in the laboratory. We now knew what ‘sterile’ meant, and we knew that there could be no such thing as ‘partial sterilization’. The discovery of bacteria high in the upper atmosphere, in the mud of the deep-sea bottom, in the waters of hot springs, and in the Arctic glaciers established bacterial ubiquity as almost absolute. It was the revolution in technique alone that made possible modern bacteriology and the subsequent research connecting bacteria to phenomena of human concern, research which today is more prodigious than ever. We are just beginning to understand the relationship of bacteria to certain human diseases, soil chemistry, nutrition, and the phenomenon of antibiosis, wherein a product of one organism (e.g. penicillin) is detrimental to another.

It is not an exaggeration, then, to say that the emergence of the cell theory represents biology’s most significant and fruitful advance. The realization that all plants and animals are composed of cells that are essentially alike, that cells are all formed by the same fundamental division process, and that the total organism is a whole made up of the activities and inter-relations of its individual cells, opened up horizons we have not even begun to approach. The cell is a microcosm of life, for in its origin, nature and continuity resides the entire problem of biology.

CAT/1998(RC)

Question. 130

It can be inferred from the passage that

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

CAT/1998(RC)

Question. 131

According to the passage

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

CAT/1998(RC)

Question. 132

Pasteur began his work on the basis of the contention that

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

CAT/1998(RC)

Question. 133

The porcelain filters of the bacteriology laboratories owed their descent to

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

CAT/1998(RC)

Question. 134

What, according to the passage, was Pasteur’s declaration to the world?

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

From ancient times, men have believed that, under certain peculiar circumstances, life could arise spontaneously: from the ooze of rivers could eel and from the entrails of dead bulls, bees; worms from mud and maggots from dead meat. This belief was held by Aristotle, Newton, and Desecrates, among many others, and apparently the great William Harvey too. The weight of centuries gradually disintegrated men’s beliefs in the spontaneous origin of maggots and mice, but the doctrine of spontaneous generation clung Tenaciously to the question of bacterial origin.

In association with Buffon, the Irish Jesuit priest John Needham declared that he could bring about at will that creation of living microbes in heat-sterilized broths, and presumably, in propitiation, theorized that God did not create living things directly but bade the earth and water to bring them forth. In his Dictionaire Philosophique, Voltaire reflected that it was odd to read of Father Needham’s claim while atheists conversely ‘should deny a Creator yet attribute to themselves the power of creating eels. But worte Thomas Husley, ‘The great tragedy of science-the slaying of a beautiful hypothesis by an ugly fact-which is so constantly being enacted under the eyes of philosophers, was played almost immediately, for the benefit of Buffon and Needham.’

The Italian Abbe Spallanzani did an experiment. He showed that a broth sealed from the air while boiling never develops bacterial growths and hence never decomposes. To Needharm’s objection that Spallanzani had ruined his broths and the air above them by excessive boiling, the Abbe replied by breaking the seals of his flasks. Air rushed in and bacterial growth began but the essential conflict remained. Whatever Spaqllanzani had his followers did to remove seeds and contaminants was regarded by the spontaneous generations as damaging to the ‘vital force’ from whence comes new life.

Thus, doubt remained, and into the controversy came the Titanic figure of Louis Pasteur. Believing that a solution to this problem was essential to the development of his theories concerning the role of bacteria in nature, Pasteur freely acknowledged the possibility that living bacteria very well might be arising anew from inanimate matter. To him, the research problem was largely a technical one: to repeat the work of those who claimed to have observed spontaneous generation but to employ infinite care to discover and exclude every possible concealed portal of bacterial entry. For the one that contended that life did not enter from the outside, the proof had to go to the question of possible contamination, Pasteur worked logically. After prolonged boiling broth would ferment only when the air was admitted to it. Therefore, either air contained a factor necessary for the spontaneous generation of life or viable germs were borne in by the air and seeded in the sterile nutrient broth. Pasteur designed ingenious flasks whose long S-shaped necks could be left open. Air was traped in the sinuous glass tube. Broths boiled in these glass tubes remained sterile. When their necks were snapped to admit ordinary air, bacterial growth would then commence-but not in every case. An occasional flask would remain sterile presumably because the bacterial population of the air is unevenly distributed. The forces of spontaneous generation would not be so erratic. Continuous skepticism drove Pasteur almost to fanatical efforts to control the ingredients of his experiments to destroy the doubts of the most skeptical. He ranged from the mountain air of Montanvert, which he showed to be almost sterile, to those deep, clear wells whose waters had been rendered germ-free by slow filtration through sandy soil. The latter discovery led to the familiar porcelain filters of the bacteriology laboratory. With pores small enough to exclude bacteria, solutions allowed to percolate through them could be reliably sterilized.

The argument raged on and soon spilled beyond the boundaries of science to become a burning religious and philosophical question of the day. For many, Pasteur’s conclusions caused conflict because they seemed simultaneous to support the Biblical account of creation while denying a variety of other philosophical systems.

The public was soon caught up in the crossfire of a vigorous series of public lectures and demonstrations by leading exponents of both views, novelists, clergymen, their adjuncts, and friends. Perhaps the most famous of these evenings in the theatre - was Pasteur’s public with a great debate between Huxley and Bishop Wiberforce for the elegance of rhetoric-was Pasteur’s public lecture at the Sorbonne of 7 April 1864. Having shown his audience the swan-necked flasks containing sterile broths, he concluded, “And, therefore, gentlemen, I could point to that liquid and say to you, I have taken my drop of water from the immensity of creation, and I have taken it full of the elements appropriated to the development of inferior beings. And I wait, I watch, I question it - begging it to recommence for new the beautiful spectacle of the first creation. But it is dumb, dumb since these experiments were begun several years ago; it is dumb because I have kept it from the only thing man does not know how to produce: form the germs that float in the air, from Life, for Life is a germ and a germ is a Life. Never will the doctrine of spontaneous generation recover from the mortal blow of this simple experiment .” And it has not Today these same flasks stand immutable: they are still free of microbial life.

It is an interesting fact that despite the ringing declaration of Pasteur, the issue did not die completely. And although far from healthy, it is not yet dead. In his fascinating biography of Pasteur, Rene Dubos has traced the later developments which saw new technical progress and criticism, and new energetic figures in the breach of the battle such as Bastion, for, and the immortal Tyndall, against, the doctrine of spontaneous generation. There was also new ‘sorrow’ for Pasteur as he read years later, in 1877, the last jottings of the great physiologist Claude Bernard and saw in them the ‘mystical’ suggestion that yeast may arise from grape juice. Even at this late date, Pasteur was stirred to new experiments again to prove to the dead Bernard and his followers the correctness of his position.

It seems to me that spontaneous generation is not a possibility, but a completely reasonable possibility that should never be relinquished from scientific thought. Before men knew of bacteria, they accepted the doctrine of spontaneous generation as the ‘only reasonable alternative’ to a belief in supernatural creation. But today, as we look for satisfaction at the downfall of the spontaneous generation hypothesis, we must not forget that science has rationally concluded that life once did originate on earth by spontaneous generation. If was really Pasteur’s evidence against a spontaneous generation that for the first time brought the whole difficult question of the origin of life before the scientific world. . In the above controversy, what was unreasonable was the parade of men who claimed to have ‘proved’ or who resolutely ‘believed in’ spontaneous generation on the face of proof not that spontaneous generation that for the first time brought the whole difficult question of the origin of life before the scientific world. In the above controversy, what was unreasonable was the parade of men who claimed to have ‘proved’ or who resolutely ‘ believed in’ spontaneous generation on the face of proof not that spontaneous generation cannot occur-but that their work was shot thorugh with experimentaal error. The acceptabel evidence also makes it clear that spontaneous generation, if it does not occur, must obviously be a highley improbable even under present conditions. Logic tells us that science can only prove an event improbable : It can never prove it impossible - and Gamow has appropriately remarked that nobody is really certain what would happen if a hermetically sealed can were opened after a couple of million years. Modern science agrees that it was highly improbable for life to have arisen in the pre-Cambrian seas, but it concluded, nevertheless, that there it did occur. with this, I think, Pasteur would agree.

Aside from their theoretical implications, these researches had the great practical result of putting bacteriology on a solid footing. It was now clear how exceedingly careful one had to be to avoid bacterial contamination in the laboratory. We now knew what ‘sterile’ meant, and we knew that there could be no such thing as ‘partial sterilization’. The discovery of bacteria high in the upper atmosphere, in the mud of the deep-sea bottom, in the waters of hot springs, and in the Arctic glaciers established bacterial ubiquity as almost absolute. It was the revolution in technique alone that made possible modern bacteriology and the subsequent research connecting bacteria to phenomena of human concern, research which today is more prodigious than ever. We are just beginning to understand the relationship of bacteria to certain human diseases, soil chemistry, nutrition, and the phenomenon of antibiosis, wherein a product of one organism (e.g. penicillin) is detrimental to another.

It is not an exaggeration, then, to say that the emergence of the cell theory represents biology’s most significant and fruitful advance. The realization that all plants and animals are composed of cells that are essentially alike, that cells are all formed by the same fundamental division process, and that the total organism is a whole made up of the activities and inter-relations of its individual cells, opened up horizons we have not even begun to approach. The cell is a microcosm of life, for in its origin, nature and continuity reside the entire problem of biology.

CAT/1998(RC)

Question. 135

What, according to the writer, was the problem with the proponents of spontaneous generation?

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

From ancient times, men have believed that, under certain peculiar circumstances, life could arise spontaneously: from the ooze of rivers could come eels and, from the entrails of dead bulls, bees; worms from mud and maggots from dead meat. This belief was held by Aristotle, Newton, and Descartes, among many others, and apparently by the great William Harvey too. The weight of centuries gradually disintegrated men’s belief in the spontaneous origin of maggots and mice, but the doctrine of spontaneous generation clung tenaciously to the question of bacterial origin.

In association with Buffon, the Irish Jesuit priest John Needham declared that he could bring about at will the creation of living microbes in heat-sterilized broths, and presumably, in propitiation, theorized that God did not create living things directly but bade the earth and water to bring them forth. In his Dictionnaire Philosophique, Voltaire reflected that it was odd to read of Father Needham’s claim while atheists conversely ‘should deny a Creator yet attribute to themselves the power of creating eels’. But, wrote Thomas Huxley, ‘The great tragedy of science - the slaying of a beautiful hypothesis by an ugly fact - which is so constantly being enacted under the eyes of philosophers, was played, almost immediately, for the benefit of Buffon and Needham.’

The Italian Abbe Spallanzani did an experiment. He showed that a broth sealed from the air while boiling never develops bacterial growths and hence never decomposes. To Needham’s objection that Spallanzani had ruined his broths and the air above them by excessive boiling, the Abbe replied by breaking the seals of his flasks. Air rushed in and bacterial growth began, but the essential conflict remained. Whatever Spallanzani and his followers did to remove seeds and contaminants was regarded by the spontaneous generationists as damaging to the ‘vital force’ from whence comes new life.

Thus, doubt remained, and into the controversy came the Titanic figure of Louis Pasteur. Believing that a solution to this problem was essential to the development of his theories concerning the role of bacteria in nature, Pasteur freely acknowledged the possibility that living bacteria very well might be arising anew from inanimate matter. To him, the research problem was largely a technical one: to repeat the work of those who claimed to have observed spontaneous generation, but to employ infinite care to discover and exclude every possible concealed portal of bacterial entry. For one who contended that life did not enter from the outside, the proof had to go to the question of possible contamination. Pasteur worked logically. After prolonged boiling, a broth would ferment only when air was admitted to it. Therefore, either air contained a factor necessary for the spontaneous generation of life, or viable germs were borne in by the air and seeded in the sterile nutrient broth. Pasteur designed ingenious flasks whose long, S-shaped necks could be left open. Air was trapped in the sinuous glass tube. Broths boiled in these flasks remained sterile. When their necks were snapped to admit ordinary air, bacterial growth would then commence - but not in every case. An occasional flask would remain sterile, presumably because the bacterial population of the air is unevenly distributed. The forces of spontaneous generation would not be so erratic. Continuous skepticism drove Pasteur almost to fanatical efforts to control the ingredients of his experiments, to destroy the doubts of the most skeptical. He ranged from the mountain air of Montanvert, which he showed to be almost sterile, to those deep, clear wells whose waters had been rendered germ-free by slow filtration through sandy soil. The latter discovery led to the familiar porcelain filters of the bacteriology laboratory. With pores small enough to exclude bacteria, solutions allowed to percolate through them could be reliably sterilized.

The argument raged on and soon spilled beyond the boundaries of science to become a burning religious and philosophical question of the day. For many, Pasteur’s conclusions caused conflict because they seemed simultaneously to support the Biblical account of creation while denying a variety of other philosophical systems.

The public was soon caught up in the crossfire of a vigorous series of public lectures and demonstrations by leading exponents of both views, novelists, clergymen, their adjuncts, and friends. Perhaps the most famous of these evenings in the theatre - comparable, for the elegance of its rhetoric, with the great debate between Huxley and Bishop Wilberforce - was Pasteur’s public lecture at the Sorbonne of 7 April 1864. Having shown his audience the swan-necked flasks containing sterile broths, he concluded, “And, therefore, gentlemen, I could point to that liquid and say to you, I have taken my drop of water from the immensity of creation, and I have taken it full of the elements appropriated to the development of inferior beings. And I wait, I watch, I question it - begging it to recommence for me the beautiful spectacle of the first creation. But it is dumb, dumb since these experiments were begun several years ago; it is dumb because I have kept it from the only thing man does not know how to produce: from the germs that float in the air, from Life, for Life is a germ and a germ is Life. Never will the doctrine of spontaneous generation recover from the mortal blow of this simple experiment.” And it has not. Today these same flasks stand immutable: they are still free of microbial life.

It is an interesting fact that despite the ringing declaration of Pasteur, the issue did not die completely. And although far from healthy, it is not yet dead. In his fascinating biography of Pasteur, Rene Dubos has traced the later developments, which saw new technical progress and criticism, and new energetic figures in the breach of the battle, such as Bastian, for, and the immortal Tyndall, against, the doctrine of spontaneous generation. There was also new ‘sorrow’ for Pasteur as he read, years later in 1877, the last jottings of the great physiologist Claude Bernard and saw in them the ‘mystical’ suggestion that yeast may arise from grape juice. Even at this late date, Pasteur was stirred to new experiments, again to prove to the dead Bernard and his followers the correctness of his position.

It seems to me that spontaneous generation is not only a possibility, but a completely reasonable possibility that should never be relinquished from scientific thought. Before men knew of bacteria, they accepted the doctrine of spontaneous generation as the ‘only reasonable alternative’ to a belief in supernatural creation. But today, as we look with satisfaction at the downfall of the spontaneous generation hypothesis, we must not forget that science has rationally concluded that life once did originate on earth by spontaneous generation. It was really Pasteur’s evidence against spontaneous generation that for the first time brought the whole difficult question of the origin of life before the scientific world. In the above controversy, what was unreasonable was the parade of men who claimed to have ‘proved’ or who resolutely ‘believed in’ spontaneous generation in the face of proof, not that spontaneous generation cannot occur, but that their work was shot through with experimental error. The acceptable evidence also makes it clear that spontaneous generation, if it does occur, must obviously be a highly improbable event under present conditions. Logic tells us that science can only prove an event improbable: it can never prove it impossible - and Gamow has appropriately remarked that nobody is really certain what would happen if a hermetically sealed can were opened after a couple of million years. Modern science agrees that it was highly improbable for life to have arisen in the pre-Cambrian seas, but it concluded, nevertheless, that there it did occur. With this, I think, Pasteur would agree.

Aside from their theoretical implications, these researches had the great practical result of putting bacteriology on a solid footing. It was now clear how exceedingly careful one had to be to avoid bacterial contamination in the laboratory. We now knew what ‘sterile’ meant, and we knew that there could be no such thing as ‘partial sterilization’. The discovery of bacteria high in the upper atmosphere, in the mud of the deep-sea bottom, in the waters of hot springs, and in the Arctic glaciers established bacterial ubiquity as almost absolute. It was the revolution in technique alone that made possible modern bacteriology and the subsequent research connecting bacteria to phenomena of human concern, research which today is more prodigious than ever. We are just beginning to understand the relationship of bacteria to certain human diseases, soil chemistry, nutrition, and the phenomenon of antibiosis, wherein a product of one organism (e.g. penicillin) is detrimental to another.

It is not an exaggeration, then, to say that the emergence of the cell theory represents biology’s most significant and fruitful advance. The realization that all plants and animals are composed of cells that are essentially alike, that cells are all formed by the same fundamental division process, and that the total organism is a whole made up of the activities and inter-relations of its individual cells, opened up horizons we have not even begun to approach. The cell is a microcosm of life, for in its origin, nature and continuity reside the entire problem of biology.

CAT/1998(RC)

Question. 136

One of the results of the theoretical crossfire regarding bacteriology was that

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

From ancient times, men have believed that, under certain peculiar circumstances, life could arise spontaneously: from the ooze of rivers could come eels and, from the entrails of dead bulls, bees; worms from mud and maggots from dead meat. This belief was held by Aristotle, Newton, and Descartes, among many others, and apparently by the great William Harvey too. The weight of centuries gradually disintegrated men’s belief in the spontaneous origin of maggots and mice, but the doctrine of spontaneous generation clung tenaciously to the question of bacterial origin.

In association with Buffon, the Irish Jesuit priest John Needham declared that he could bring about at will the creation of living microbes in heat-sterilized broths, and presumably, in propitiation, theorized that God did not create living things directly but bade the earth and water to bring them forth. In his Dictionnaire Philosophique, Voltaire reflected that it was odd to read of Father Needham’s claim while atheists conversely ‘should deny a Creator yet attribute to themselves the power of creating eels’. But, wrote Thomas Huxley, ‘The great tragedy of science - the slaying of a beautiful hypothesis by an ugly fact - which is so constantly being enacted under the eyes of philosophers, was played, almost immediately, for the benefit of Buffon and Needham.’

The Italian Abbe Spallanzani did an experiment. He showed that a broth sealed from the air while boiling never develops bacterial growths and hence never decomposes. To Needham’s objection that Spallanzani had ruined his broths and the air above them by excessive boiling, the Abbe replied by breaking the seals of his flasks. Air rushed in and bacterial growth began, but the essential conflict remained. Whatever Spallanzani and his followers did to remove seeds and contaminants was regarded by the spontaneous generationists as damaging to the ‘vital force’ from whence comes new life.

Thus, doubt remained, and into the controversy came the Titanic figure of Louis Pasteur. Believing that a solution to this problem was essential to the development of his theories concerning the role of bacteria in nature, Pasteur freely acknowledged the possibility that living bacteria very well might be arising anew from inanimate matter. To him, the research problem was largely a technical one: to repeat the work of those who claimed to have observed spontaneous generation, but to employ infinite care to discover and exclude every possible concealed portal of bacterial entry. For one who contended that life did not enter from the outside, the proof had to go to the question of possible contamination. Pasteur worked logically. After prolonged boiling, a broth would ferment only when air was admitted to it. Therefore, either air contained a factor necessary for the spontaneous generation of life, or viable germs were borne in by the air and seeded in the sterile nutrient broth. Pasteur designed ingenious flasks whose long, S-shaped necks could be left open. Air was trapped in the sinuous glass tube. Broths boiled in these flasks remained sterile. When their necks were snapped to admit ordinary air, bacterial growth would then commence - but not in every case. An occasional flask would remain sterile, presumably because the bacterial population of the air is unevenly distributed. The forces of spontaneous generation would not be so erratic. Continuous skepticism drove Pasteur almost to fanatical efforts to control the ingredients of his experiments, to destroy the doubts of the most skeptical. He ranged from the mountain air of Montanvert, which he showed to be almost sterile, to those deep, clear wells whose waters had been rendered germ-free by slow filtration through sandy soil. The latter discovery led to the familiar porcelain filters of the bacteriology laboratory. With pores small enough to exclude bacteria, solutions allowed to percolate through them could be reliably sterilized.

The argument raged on and soon spilled beyond the boundaries of science to become a burning religious and philosophical question of the day. For many, Pasteur’s conclusions caused conflict because they seemed simultaneously to support the Biblical account of creation while denying a variety of other philosophical systems.

The public was soon caught up in the crossfire of a vigorous series of public lectures and demonstrations by leading exponents of both views, novelists, clergymen, their adjuncts, and friends. Perhaps the most famous of these evenings in the theatre - comparable, for the elegance of its rhetoric, with the great debate between Huxley and Bishop Wilberforce - was Pasteur’s public lecture at the Sorbonne of 7 April 1864. Having shown his audience the swan-necked flasks containing sterile broths, he concluded, “And, therefore, gentlemen, I could point to that liquid and say to you, I have taken my drop of water from the immensity of creation, and I have taken it full of the elements appropriated to the development of inferior beings. And I wait, I watch, I question it - begging it to recommence for me the beautiful spectacle of the first creation. But it is dumb, dumb since these experiments were begun several years ago; it is dumb because I have kept it from the only thing man does not know how to produce: from the germs that float in the air, from Life, for Life is a germ and a germ is Life. Never will the doctrine of spontaneous generation recover from the mortal blow of this simple experiment.” And it has not. Today these same flasks stand immutable: they are still free of microbial life.

It is an interesting fact that despite the ringing declaration of Pasteur, the issue did not die completely. And although far from healthy, it is not yet dead. In his fascinating biography of Pasteur, Rene Dubos has traced the later developments, which saw new technical progress and criticism, and new energetic figures in the breach of the battle, such as Bastian, for, and the immortal Tyndall, against, the doctrine of spontaneous generation. There was also new ‘sorrow’ for Pasteur as he read, years later in 1877, the last jottings of the great physiologist Claude Bernard and saw in them the ‘mystical’ suggestion that yeast may arise from grape juice. Even at this late date, Pasteur was stirred to new experiments, again to prove to the dead Bernard and his followers the correctness of his position.

It seems to me that spontaneous generation is not only a possibility, but a completely reasonable possibility that should never be relinquished from scientific thought. Before men knew of bacteria, they accepted the doctrine of spontaneous generation as the ‘only reasonable alternative’ to a belief in supernatural creation. But today, as we look with satisfaction at the downfall of the spontaneous generation hypothesis, we must not forget that science has rationally concluded that life once did originate on earth by spontaneous generation. It was really Pasteur’s evidence against spontaneous generation that for the first time brought the whole difficult question of the origin of life before the scientific world. In the above controversy, what was unreasonable was the parade of men who claimed to have ‘proved’ or who resolutely ‘believed in’ spontaneous generation in the face of proof, not that spontaneous generation cannot occur, but that their work was shot through with experimental error. The acceptable evidence also makes it clear that spontaneous generation, if it does occur, must obviously be a highly improbable event under present conditions. Logic tells us that science can only prove an event improbable: it can never prove it impossible - and Gamow has appropriately remarked that nobody is really certain what would happen if a hermetically sealed can were opened after a couple of million years. Modern science agrees that it was highly improbable for life to have arisen in the pre-Cambrian seas, but it concluded, nevertheless, that there it did occur. With this, I think, Pasteur would agree.

Aside from their theoretical implications, these researches had the great practical result of putting bacteriology on a solid footing. It was now clear how exceedingly careful one had to be to avoid bacterial contamination in the laboratory. We now knew what ‘sterile’ meant, and we knew that there could be no such thing as ‘partial sterilization’. The discovery of bacteria high in the upper atmosphere, in the mud of the deep-sea bottom, in the waters of hot springs, and in the Arctic glaciers established bacterial ubiquity as almost absolute. It was the revolution in technique alone that made possible modern bacteriology and the subsequent research connecting bacteria to phenomena of human concern, research which today is more prodigious than ever. We are just beginning to understand the relationship of bacteria to certain human diseases, soil chemistry, nutrition, and the phenomenon of antibiosis, wherein a product of one organism (e.g. penicillin) is detrimental to another.

It is not an exaggeration, then, to say that the emergence of the cell theory represents biology’s most significant and fruitful advance. The realization that all plants and animals are composed of cells that are essentially alike, that cells are all formed by the same fundamental division process, and that the total organism is a whole made up of the activities and inter-relations of its individual cells, opened up horizons we have not even begun to approach. The cell is a microcosm of life, for in its origin, nature and continuity reside the entire problem of biology.

CAT/1998(RC)

Question. 137

One of the reasons for the conflict caused by Pasteur’s experiments was that

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

From ancient times, men have believed that, under certain peculiar circumstances, life could arise spontaneously: from the ooze of rivers could come eels and, from the entrails of dead bulls, bees; worms from mud and maggots from dead meat. This belief was held by Aristotle, Newton, and Descartes, among many others, and apparently by the great William Harvey too. The weight of centuries gradually disintegrated men’s belief in the spontaneous origin of maggots and mice, but the doctrine of spontaneous generation clung tenaciously to the question of bacterial origin.

In association with Buffon, the Irish Jesuit priest John Needham declared that he could bring about at will the creation of living microbes in heat-sterilized broths, and presumably, in propitiation, theorized that God did not create living things directly but bade the earth and water to bring them forth. In his Dictionnaire Philosophique, Voltaire reflected that it was odd to read of Father Needham’s claim while atheists conversely ‘should deny a Creator yet attribute to themselves the power of creating eels’. But, wrote Thomas Huxley, ‘The great tragedy of science - the slaying of a beautiful hypothesis by an ugly fact - which is so constantly being enacted under the eyes of philosophers, was played, almost immediately, for the benefit of Buffon and Needham.’

The Italian Abbe Spallanzani did an experiment. He showed that a broth sealed from the air while boiling never develops bacterial growths and hence never decomposes. To Needham’s objection that Spallanzani had ruined his broths and the air above them by excessive boiling, the Abbe replied by breaking the seals of his flasks. Air rushed in and bacterial growth began, but the essential conflict remained. Whatever Spallanzani and his followers did to remove seeds and contaminants was regarded by the spontaneous generationists as damaging to the ‘vital force’ from whence comes new life.

Thus, doubt remained, and into the controversy came the Titanic figure of Louis Pasteur. Believing that a solution to this problem was essential to the development of his theories concerning the role of bacteria in nature, Pasteur freely acknowledged the possibility that living bacteria very well might be arising anew from inanimate matter. To him, the research problem was largely a technical one: to repeat the work of those who claimed to have observed spontaneous generation, but to employ infinite care to discover and exclude every possible concealed portal of bacterial entry. For one who contended that life did not enter from the outside, the proof had to go to the question of possible contamination. Pasteur worked logically. After prolonged boiling, a broth would ferment only when air was admitted to it. Therefore, either air contained a factor necessary for the spontaneous generation of life, or viable germs were borne in by the air and seeded in the sterile nutrient broth. Pasteur designed ingenious flasks whose long, S-shaped necks could be left open. Air was trapped in the sinuous glass tube. Broths boiled in these flasks remained sterile. When their necks were snapped to admit ordinary air, bacterial growth would then commence - but not in every case. An occasional flask would remain sterile, presumably because the bacterial population of the air is unevenly distributed. The forces of spontaneous generation would not be so erratic. Continuous skepticism drove Pasteur almost to fanatical efforts to control the ingredients of his experiments, to destroy the doubts of the most skeptical. He ranged from the mountain air of Montanvert, which he showed to be almost sterile, to those deep, clear wells whose waters had been rendered germ-free by slow filtration through sandy soil. The latter discovery led to the familiar porcelain filters of the bacteriology laboratory. With pores small enough to exclude bacteria, solutions allowed to percolate through them could be reliably sterilized.

The argument raged on and soon spilled beyond the boundaries of science to become a burning religious and philosophical question of the day. For many, Pasteur’s conclusions caused conflict because they seemed simultaneously to support the Biblical account of creation while denying a variety of other philosophical systems.

The public was soon caught up in the crossfire of a vigorous series of public lectures and demonstrations by leading exponents of both views, novelists, clergymen, their adjuncts, and friends. Perhaps the most famous of these evenings in the theatre - comparable, for the elegance of its rhetoric, with the great debate between Huxley and Bishop Wilberforce - was Pasteur’s public lecture at the Sorbonne of 7 April 1864. Having shown his audience the swan-necked flasks containing sterile broths, he concluded, “And, therefore, gentlemen, I could point to that liquid and say to you, I have taken my drop of water from the immensity of creation, and I have taken it full of the elements appropriated to the development of inferior beings. And I wait, I watch, I question it - begging it to recommence for me the beautiful spectacle of the first creation. But it is dumb, dumb since these experiments were begun several years ago; it is dumb because I have kept it from the only thing man does not know how to produce: from the germs that float in the air, from Life, for Life is a germ and a germ is Life. Never will the doctrine of spontaneous generation recover from the mortal blow of this simple experiment.” And it has not. Today these same flasks stand immutable: they are still free of microbial life.

It is an interesting fact that despite the ringing declaration of Pasteur, the issue did not die completely. And although far from healthy, it is not yet dead. In his fascinating biography of Pasteur, Rene Dubos has traced the later developments, which saw new technical progress and criticism, and new energetic figures in the breach of the battle, such as Bastian, for, and the immortal Tyndall, against, the doctrine of spontaneous generation. There was also new ‘sorrow’ for Pasteur as he read, years later in 1877, the last jottings of the great physiologist Claude Bernard and saw in them the ‘mystical’ suggestion that yeast may arise from grape juice. Even at this late date, Pasteur was stirred to new experiments, again to prove to the dead Bernard and his followers the correctness of his position.

It seems to me that spontaneous generation is not only a possibility, but a completely reasonable possibility that should never be relinquished from scientific thought. Before men knew of bacteria, they accepted the doctrine of spontaneous generation as the ‘only reasonable alternative’ to a belief in supernatural creation. But today, as we look with satisfaction at the downfall of the spontaneous generation hypothesis, we must not forget that science has rationally concluded that life once did originate on earth by spontaneous generation. It was really Pasteur’s evidence against spontaneous generation that for the first time brought the whole difficult question of the origin of life before the scientific world. In the above controversy, what was unreasonable was the parade of men who claimed to have ‘proved’ or who resolutely ‘believed in’ spontaneous generation in the face of proof, not that spontaneous generation cannot occur, but that their work was shot through with experimental error. The acceptable evidence also makes it clear that spontaneous generation, if it does occur, must obviously be a highly improbable event under present conditions. Logic tells us that science can only prove an event improbable: it can never prove it impossible - and Gamow has appropriately remarked that nobody is really certain what would happen if a hermetically sealed can were opened after a couple of million years. Modern science agrees that it was highly improbable for life to have arisen in the pre-Cambrian seas, but it concluded, nevertheless, that there it did occur. With this, I think, Pasteur would agree.

Aside from their theoretical implications, these researches had the great practical result of putting bacteriology on a solid footing. It was now clear how exceedingly careful one had to be to avoid bacterial contamination in the laboratory. We now knew what ‘sterile’ meant, and we knew that there could be no such thing as ‘partial sterilization’. The discovery of bacteria high in the upper atmosphere, in the mud of the deep-sea bottom, in the waters of hot springs, and in the Arctic glaciers established bacterial ubiquity as almost absolute. It was the revolution in technique alone that made possible modern bacteriology and the subsequent research connecting bacteria to phenomena of human concern, research which today is more prodigious than ever. We are just beginning to understand the relationship of bacteria to certain human diseases, soil chemistry, nutrition, and the phenomenon of antibiosis, wherein a product of one organism (e.g. penicillin) is detrimental to another.

It is not an exaggeration, then, to say that the emergence of the cell theory represents biology’s most significant and fruitful advance. The realization that all plants and animals are composed of cells that are essentially alike, that cells are all formed by the same fundamental division process, and that the total organism is a whole made up of the activities and inter-relations of its individual cells, opened up horizons we have not even begun to approach. The cell is a microcosm of life, for in its origin, nature and continuity reside the entire problem of biology.

CAT/1998(RC)

Question. 138

According to the author

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

That the doctrines connected with the name of Mr. Darwin are altering our principles has become a sort of commonplace thing to say. And moral principles are said to share in this general transformation. Now, to pass by other subjects, I do not see why Darwinism need change our ultimate moral ideas. It will not modify our conception of the end, either for the community or for the individual, unless we have been holding views which long before Darwin were out of date. As to the principles of ethics, I perceive, in short, no sign of revolution. Darwinism has indeed helped many to a truer conception of the end, but I cannot admit that it has either originated or modified that conception.

And yet in ethics Darwinism after all may perhaps be revolutionary. It may lead not to another view about the end, but to a different way of regarding the relative importance of the means. For in the ordinary moral creed those means seem estimated on no rational principle. Our creed appears rather to be an irrational mixture of jarring elements. We have the moral code of Christianity, accepted in part; rejected practically by all save a few fanatics. But we do not realise how in its very principle the Christian ideal is false. And when we reject this code for another, and in part a sounder, morality, we are in the same condition of blindness and of practical confusion. It is here that Darwinism, with all the tendencies we may group under that name, seems destined to intervene. It will make itself felt, I believe, more and more effectually. It may force on us in some points a correction of our moral views, and a return to a non-Christian and perhaps a Hellenic ideal. I propose to illustrate here these general statements by some remarks on Punishment.

Darwinism, I have said, has not even modified our ideas of the Chief Good. We may take that as the welfare of the community realised in its members. There is, of course, a question as to the meaning to be given to welfare. We may identify that with mere pleasure, or again with mere system, or may rather view both as inseparable aspects of perfection and individuality. And the extent and nature of the community would once more be a subject for some discussion. But we are not forced to enter on these controversies here. We may leave welfare undefined, and for present purposes need not distinguish the community from the state. The welfare of this whole exists, of course, nowhere outside the individuals, and the individuals again have rights and duties only as members in the whole. This is the revived Hellenism — or we may call it the organic view of things — urged by German idealism early in the present century.

CAT/1996(RC)

Question. 139

According to the author, the doctrines of Mr. Darwin......

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

That the doctrines connected with the name of Mr. Darwin are altering our principles has become a sort of commonplace thing to say. And moral principles are said to share in this general transformation. Now, to pass by other subjects, I do not see why Darwinism need change our ultimate moral ideas. It will not modify our conception of the end, either for the community or for the individual, unless we have been holding views which long before Darwin were out of date. As to the principles of ethics, I perceive, in short, no sign of revolution. Darwinism has indeed helped many to a truer conception of the end, but I cannot admit that it has either originated or modified that conception.

And yet in ethics Darwinism after all may perhaps be revolutionary. It may lead not to another view about the end, but to a different way of regarding the relative importance of the means. For in the ordinary moral creed those means seem estimated on no rational principle. Our creed appears rather to be an irrational mixture of jarring elements. We have the moral code of Christianity, accepted in part; rejected practically by all save a few fanatics. But we do not realise how in its very principle the Christian ideal is false. And when we reject this code for another, and in part a sounder, morality, we are in the same condition of blindness and of practical confusion. It is here that Darwinism, with all the tendencies we may group under that name, seems destined to intervene. It will make itself felt, I believe, more and more effectually. It may force on us in some points a correction of our moral views, and a return to a non-Christian and perhaps a Hellenic ideal. I propose to illustrate here these general statements by some remarks on Punishment.

Darwinism, I have said, has not even modified our ideas of the Chief Good. We may take that as the welfare of the community realised in its members. There is, of course, a question as to the meaning to be given to welfare. We may identify that with mere pleasure, or again with mere system, or may rather view both as inseparable aspects of perfection and individuality. And the extent and nature of the community would once more be a subject for some discussion. But we are not forced to enter on these controversies here. We may leave welfare undefined, and for present purposes need not distinguish the community from the state. The welfare of this whole exists, of course, nowhere outside the individuals, and the individuals again have rights and duties only as members in the whole. This is the revived Hellenism — or we may call it the organic view of things — urged by German idealism early in the present century.

CAT/1996(RC)

Question. 140

What is most probably the author’s opinion of the existing moral principles of the people?

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

That the doctrines connected with the name of Mr. Darwin are altering our principles has become a sort of commonplace thing to say. And moral principles are said to share in this general transformation. Now, to pass by other subjects, I do not see why Darwinism need change our ultimate moral ideas. It will not modify our conception of the end, either for the community or for the individual, unless we have been holding views which long before Darwin were out of date. As to the principles of ethics, I perceive, in short, no sign of revolution. Darwinism has indeed helped many to a truer conception of the end, but I cannot admit that it has either originated or modified that conception.

And yet in ethics Darwinism after all may perhaps be revolutionary. It may lead not to another view about the end, but to a different way of regarding the relative importance of the means. For in the ordinary moral creed those means seem estimated on no rational principle. Our creed appears rather to be an irrational mixture of jarring elements. We have the moral code of Christianity, accepted in part; rejected practically by all save a few fanatics. But we do not realise how in its very principle the Christian ideal is false. And when we reject this code for another, and in part a sounder, morality, we are in the same condition of blindness and of practical confusion. It is here that Darwinism, with all the tendencies we may group under that name, seems destined to intervene. It will make itself felt, I believe, more and more effectually. It may force on us in some points a correction of our moral views, and a return to a non-Christian and perhaps a Hellenic ideal. I propose to illustrate here these general statements by some remarks on Punishment.

Darwinism, I have said, has not even modified our ideas of the Chief Good. We may take that as the welfare of the community realised in its members. There is, of course, a question as to the meaning to be given to welfare. We may identify that with mere pleasure, or again with mere system, or may rather view both as inseparable aspects of perfection and individuality. And the extent and nature of the community would once more be a subject for some discussion. But we are not forced to enter on these controversies here. We may leave welfare undefined, and for present purposes need not distinguish the community from the state. The welfare of this whole exists, of course, nowhere outside the individuals, and the individuals again have rights and duties only as members in the whole. This is the revived Hellenism — or we may call it the organic view of things — urged by German idealism early in the present century.

CAT/1996(RC)

Question. 141

According to the author, the moral code of Christianity.......

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

That the doctrines connected with the name of Mr. Darwin are altering our principles has become a sort of commonplace thing to say. And moral principles are said to share in this general transformation. Now, to pass by other subjects, I do not see why Darwinism need change our ultimate moral ideas. It will not modify our conception of the end, either for the community or for the individual, unless we have been holding views which long before Darwin were out of date. As to the principles of ethics, I perceive, in short, no sign of revolution. Darwinism has indeed helped many to a truer conception of the end, but I cannot admit that it has either originated or modified that conception.

And yet in ethics Darwinism after all may perhaps be revolutionary. It may lead not to another view about the end, but to a different way of regarding the relative importance of the means. For in the ordinary moral creed those means seem estimated on no rational principle. Our creed appears rather to be an irrational mixture of jarring elements. We have the moral code of Christianity, accepted in part; rejected practically by all save a few fanatics. But we do not realise how in its very principle the Christian ideal is false. And when we reject this code for another, and in part a sounder, morality, we are in the same condition of blindness and of practical confusion. It is here that Darwinism, with all the tendencies we may group under that name, seems destined to intervene. It will make itself felt, I believe, more and more effectually. It may force on us in some points a correction of our moral views, and a return to a non-Christian and perhaps a Hellenic ideal. I propose to illustrate here these general statements by some remarks on Punishment.

Darwinism, I have said, has not even modified our ideas of the Chief Good. We may take that as the welfare of the community realised in its members. There is, of course, a question as to the meaning to be given to welfare. We may identify that with mere pleasure, or again with mere system, or may rather view both as inseparable aspects of perfection and individuality. And the extent and nature of the community would once more be a subject for some discussion. But we are not forced to enter on these controversies here. We may leave welfare undefined, and for present purposes need not distinguish the community from the state. The welfare of this whole exists, of course, nowhere outside the individuals, and the individuals again have rights and duties only as members in the whole. This is the revived Hellenism — or we may call it the organic view of things — urged by German idealism early in the present century.

CAT/1996(RC)

Question. 142

It is implied in the passage that...........

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

That the doctrines connected with the name of Mr. Darwin are altering our principles has become a sort of commonplace thing to say. And moral principles are said to share in this general transformation. Now, to pass by other subjects, I do not see why Darwinism need change our ultimate moral ideas. It will not modify our conception of the end, either for the community or for the individual, unless we have been holding views which long before Darwin were out of date. As to the principles of ethics, I perceive, in short, no sign of revolution. Darwinism has indeed helped many to a truer conception of the end, but I cannot admit that it has either originated or modified that conception.

And yet in ethics Darwinism after all may perhaps be revolutionary. It may lead not to another view about the end, but to a different way of regarding the relative importance of the means. For in the ordinary moral creed those means seem estimated on no rational principle. Our creed appears rather to be an irrational mixture of jarring elements. We have the moral code of Christianity, accepted in part; rejected practically by all save a few fanatics. But we do not realise how in its very principle the Christian ideal is false. And when we reject this code for another, and in part a sounder, morality, we are in the same condition of blindness and of practical confusion. It is here that Darwinism, with all the tendencies we may group under that name, seems destined to intervene. It will make itself felt, I believe, more and more effectually. It may force on us in some points a correction of our moral views, and a return to a non-Christian and perhaps a Hellenic ideal. I propose to illustrate here these general statements by some remarks on Punishment.

Darwinism, I have said, has not even modified our ideas of the Chief Good. We may take that as the welfare of the community realised in its members. There is, of course, a question as to the meaning to be given to welfare. We may identify that with mere pleasure, or again with mere system, or may rather view both as inseparable aspects of perfection and individuality. And the extent and nature of the community would once more be a subject for some discussion. But we are not forced to enter on these controversies here. We may leave welfare undefined, and for present purposes need not distinguish the community from the state. The welfare of this whole exists, of course, nowhere outside the individuals, and the individuals again have rights and duties only as members in the whole. This is the revived Hellenism — or we may call it the organic view of things — urged by German idealism early in the present century.

CAT/1996(RC)

Question. 143

What, according to the passage, is the Chief Good?

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The membrane-bound nucleus is the most prominent feature of the eukaryotic cell. Schleiden and Schwann, when setting forth the cell doctrine in the 1830s, considered that it had a central role in growth and development. Their belief has been fully supported even though they had only vague notions as to what that role might be, and how the role was to be expressed in some cellular action. The membraneless nuclear area of the prokaryotic cell, with its tangle of fine threads, is now known to play a similar role.

Some cells, like the sieve tubes of vascular plants and the red blood cells of mammals, do not possess nuclei during the greater part of their existence, although they had nuclei when in a less differentiated state. Such cells can no longer divide and their life span is limited. Other cells are regularly multinucleate. Some, like the cells of striated muscles or the latex vessels of higher plants, become so through cell fusion. Some, like the unicellular protozoan Paramecium, are normally binucleate, one of the nuclei serving as a source of hereditary information for the next generation, the other governing the day-to-day metabolic activities of the cell. Still other organisms, such as some fungi, are multinucleate because cross walls, dividing the mycelium into specific cells, are absent. The uninucleate cell, however, is the prevailing condition, and it would appear that this is the most efficient and most economical manner of partitioning living substance into manageable units. This point of view is given credence not only by the prevalence of uninucleate cells but also because for each kind of cell there is a ratio maintained between the volume of the nucleus and that of the cytoplasm. If we think of the nucleus as the control centre of the cell, this would suggest that for a given kind of cell performing a given kind of work, one nucleus can “take care of” a specific volume of cytoplasm and keep it in functioning order. In terms of materials and energy, this must mean providing the kind of information needed to keep the flow of materials and energy moving at the correct rate and in the proper channels. With the multitude of enzymes in the cell, materials and energy can of course be channeled in a multitude of ways; it is the function of some informational molecules to make certain channels of use more preferred than others at any given time. How this regulatory control is exercised is not entirely clear.

The nucleus is generally a rounded body. In plant cells, however, where the center of the cell is often occupied by a large vacuole, the nucleus may be pushed against the cell wall, causing it to assume a lens shape. In some white blood cells, such as polymorphonucleated leukocytes, and in cells of the spinning gland of some insects and spiders, the nucleus is very much lobed. The reason for this is not clear, but it may relate to the fact that for a given volume of nucleus, a lobate form provides a much greater surface area for nuclear-cytoplasmic exchanges, possibly affecting both the rate and the number of metabolic reactions. The nucleus, whatever its shape, is segregated from the cytoplasm by a double membrane, the nuclear envelope, with the two membranes separated from each other by a perinuclear space of varying width. The envelope is absent only during the time of cell division, and then just for a brief period. The outer membrane is often continuous with the membranes of the endoplasmic reticulum, possibly a retention of an earlier relationship, since the envelope, at least in part, is formed at the end of cell division by coalescing fragments of the endoplasmic reticulum. The cytoplasmic side of the nucleus is frequently coated with ribosomes, another fact that stresses the similarity and relation of the nuclear envelope to the endoplasmic reticulum. The inner membrane seems to possess a crystalline layer where it abuts the nucleoplasm, but its function remains to be determined.

Everything that passes between the cytoplasm and the nucleus in the eukaryotic cell must traverse the nuclear envelope. This includes some fairly large molecules as well as bodies such as ribosomes, which measure about 25 nm in diameter. Some passageway is, therefore, obviously necessary, since there is no indication of dissolution of the nuclear envelope in order to make such movement possible. The nuclear pores appear to be reasonable candidates for such a passageway. In plant cells, these are irregularly and rather sparsely distributed over the surface of the nucleus, but in the amphibian oocyte, for example, the pores are numerous, regularly arranged, and octagonal, and are formed by the fusion of the outer and inner membranes.

CAT/1996(RC)

Question. 144

According to the first paragraph, the contention of Schleiden and Schwann that the nucleus is the most important part of the cell has ..........

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The membrane-bound nucleus is the most prominent feature of the eukaryotic cell. Schleiden and Schwann, when setting forth the cell doctrine in the 1830s, considered that it had a central role in growth and development. Their belief has been fully supported even though they had only vague notions as to what that role might be, and how the role was to be expressed in some cellular action. The membraneless nuclear area of the prokaryotic cell, with its tangle of fine threads, is now known to play a similar role.

Some cells, like the sieve tubes of vascular plants and the red blood cells of mammals, do not possess nuclei during the greater part of their existence, although they had nuclei when in a less differentiated state. Such cells can no longer divide and their life span is limited. Other cells are regularly multinucleate. Some, like the cells of striated muscles or the latex vessels of higher plants, become so through cell fusion. Some, like the unicellular protozoan Paramecium, are normally binucleate, one of the nuclei serving as a source of hereditary information for the next generation, the other governing the day-to-day metabolic activities of the cell. Still other organisms, such as some fungi, are multinucleate because cross walls, dividing the mycelium into specific cells, are absent. The uninucleate cell, however, is the prevailing condition, and it would appear that this is the most efficient and most economical manner of partitioning living substance into manageable units. This point of view is given credence not only by the prevalence of uninucleate cells but also because for each kind of cell there is a ratio maintained between the volume of the nucleus and that of the cytoplasm. If we think of the nucleus as the control centre of the cell, this would suggest that for a given kind of cell performing a given kind of work, one nucleus can “take care of” a specific volume of cytoplasm and keep it in functioning order. In terms of materials and energy, this must mean providing the kind of information needed to keep the flow of materials and energy moving at the correct rate and in the proper channels. With the multitude of enzymes in the cell, materials and energy can of course be channeled in a multitude of ways; it is the function of some informational molecules to make certain channels of use more preferred than others at any given time. How this regulatory control is exercised is not entirely clear.

The nucleus is generally a rounded body. In plant cells, however, where the center of the cell is often occupied by a large vacuole, the nucleus may be pushed against the cell wall, causing it to assume a lens shape. In some white blood cells, such as polymorphonucleated leukocytes, and in cells of the spinning gland of some insects and spiders, the nucleus is very much lobed. The reason for this is not clear, but it may relate to the fact that for a given volume of nucleus, a lobate form provides a much greater surface area for nuclear-cytoplasmic exchanges, possibly affecting both the rate and the number of metabolic reactions. The nucleus, whatever its shape, is segregated from the cytoplasm by a double membrane, the nuclear envelope, with the two membranes separated from each other by a perinuclear space of varying width. The envelope is absent only during the time of cell division, and then just for a brief period. The outer membrane is often continuous with the membranes of the endoplasmic reticulum, possibly a retention of an earlier relationship, since the envelope, at least in part, is formed at the end of cell division by coalescing fragments of the endoplasmic reticulum. The cytoplasmic side of the nucleus is frequently coated with ribosomes, another fact that stresses the similarity and relation of the nuclear envelope to the endoplasmic reticulum. The inner membrane seems to possess a crystalline layer where it abuts the nucleoplasm, but its function remains to be determined.

Everything that passes between the cytoplasm and the nucleus in the eukaryotic cell must traverse the nuclear envelope. This includes some fairly large molecules as well as bodies such as ribosomes, which measure about 25 nm in diameter. Some passageway is, therefore, obviously necessary, since there is no indication of dissolution of the nuclear envelope in order to make such movement possible. The nuclear pores appear to be reasonable candidates for such a passageway. In plant cells, these are irregularly and rather sparsely distributed over the surface of the nucleus, but in the amphibian oocyte, for example, the pores are numerous, regularly arranged, and octagonal, and are formed by the fusion of the outer and inner membranes.

CAT/1996(RC)

Question. 145

Which of the following kinds of cells do not have nuclei?

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The membrane-bound nucleus is the most prominent feature of the eukaryotic cell. Schleiden and Schwann, when setting forth the cell doctrine in the 1830s, considered that it had a central role in growth and development. Their belief has been fully supported even though they had only vague notions as to what that role might be, and how the role was to be expressed in some cellular action. The membraneless nuclear area of the prokaryotic cell, with its tangle of fine threads, is now known to play a similar role.

Some cells, like the sieve tubes of vascular plants and the red blood cells of mammals, do not possess nuclei during the greater part of their existence, although they had nuclei when in a less differentiated state. Such cells can no longer divide and their life span is limited. Other cells are regularly multinucleate. Some, like the cells of striated muscles or the latex vessels of higher plants, become so through cell fusion. Some, like the unicellular protozoan Paramecium, are normally binucleate, one of the nuclei serving as a source of hereditary information for the next generation, the other governing the day-to-day metabolic activities of the cell. Still other organisms, such as some fungi, are multinucleate because cross walls, dividing the mycelium into specific cells, are absent. The uninucleate cell, however, is the prevailing condition, and it would appear that this is the most efficient and most economical manner of partitioning living substance into manageable units. This point of view is given credence not only by the prevalence of uninucleate cells but also because for each kind of cell there is a ratio maintained between the volume of the nucleus and that of the cytoplasm. If we think of the nucleus as the control centre of the cell, this would suggest that for a given kind of cell performing a given kind of work, one nucleus can “take care of” a specific volume of cytoplasm and keep it in functioning order. In terms of materials and energy, this must mean providing the kind of information needed to keep the flow of materials and energy moving at the correct rate and in the proper channels. With the multitude of enzymes in the cell, materials and energy can of course be channeled in a multitude of ways; it is the function of some informational molecules to make certain channels of use more preferred than others at any given time. How this regulatory control is exercised is not entirely clear.

The nucleus is generally a rounded body. In plant cells, however, where the center of the cell is often occupied by a large vacuole, the nucleus may be pushed against the cell wall, causing it to assume a lens shape. In some white blood cells, such as polymorphonucleated leukocytes, and in cells of the spinning glad of some insects and spiders, the nucleus is very much lobed. The reason for this is not clear, but it may relate to the fact that for a given volume of nucleus, a lobate form provides a much greater surface area nuclear-cytoplasmic exchanges, possibly affecting both the rate and the number of metabolic reactions. The nucleus, whatever its shape, is segregated from the cytoplasm by a double membrane, the nuclear envelope, with the two membranes separated from each other by a perinuclear space of varying width. The envelope is absent only during the time of cell division, and then just for a brief period. The outer membrane is often continued with the membranes of the endoplasmic reticulum, possible retention of an earlier relationship, since the envelope, at least in part, is formed at the end of cell division by coalescing fragments of the endoplasmic reticulum. The cytoplasmic side of the nucleus is frequently coated with ribosomes, another fact that stresses the similarity and relation of the nuclear envelope to the endoplasmic reticulum. The inner membranes seem to posses a crystalline layer where it abuts the nucleoplasm, but its function remains to be determined.

Everything that passes between the cytoplasm and the nucleus in the eukaryotic cell must transverse the nuclear envelope. The includes some fairly large molecules as well as bodies such as ribosomes, which measure about 25 mm in diameter. Some passageway is, therefore, obviously necessary since there is no indication of dissolution of the nuclear envelope in order to make such movement possible. The nuclear pores appear to be reasonable candidates for such a passageway. In plant cells, these are irregularly and rather sparsely distributed over the surface of the nucleus, but in the amphibian oocyte, for example, the pores are numerous, regularly arranged, and octagonal and are formed by the fusion of the outer and inner membrane.

CAT/1996(RC)

Question. 146

What is definitely a function of the nuclei of the normally binucleate cell?

CAT/1996(RC)

Question. 147

It may be inferred from the passage that the vast majority of cells are ....

CAT/1996(RC)

Question. 148

Why, according to the passage, are fungi multinucleate?

CAT/1996(RC)

Question. 149

Why, according to the passage, is the polymorphonucleated leukocyte probably lobed?

CAT/1996(RC)

Question. 150

The function of the crystalline layer of the inner membrane of the nucleus is ............ 

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

Icicles - two meters long and, at their tips, as bright and sharp as needles - hang from the eaves: wild ice stalactites, dragon’s teeth. I peer through them to see the world transformed to abstract whiteout. Little dervish snow tornadoes twirl across the blank. The car is out there somewhere, represented by a subtle bump in the snow-field. The old jeep truck, a larger beast, is up to its door handles, like a sinking remnant: dinosaur yielding to ice age. The town’s behemoth snow-plow passes on the road, dome light twirling, and casts aside a frozen doe that now lies, neck broken, upon the roadside snow-bank, soon to vanish under the snowfall still to come.

There is a double-jointed consciousness at work in the dramatics of big weather. Down in the snowstorm, we are as mortal as the deer. I sink to my waist in a drift, I panic, my arms claw for an instant, like a drowning swimmer’s in the powder. Men up and down the storm collapse with coronaries, snow shovels in their hands, gone a deathly colour, like frost-bitten plums.

Yet when we go upstairs to consult the Weather Channel, we settle down, as cosy gods do, to hover high above the earth and watch the play with a divine perspective. Moist air labelled L for low rides up the continent from the Gulf of Mexico and collides with the high that has slid down from the North Pole. And thus is whipped up the egg-white fluff on the studio map that, down in the frozen, messy world, buries mortals.

An odd new metaphysics of weather: It is not that weather has necessarily grown more apocalyptic. The famous “Winter of the Blue Snow” of 1886-87 turned rivers of the American West into glaciers that, when they thawed, carried along an inundation of dead cattle. President Theodore Roosevelt was virtually ruined as a rancher by the weather that destroyed 65% of his herd. In 1811 the Mississippi River flowed northward briefly because of the New Madrid earthquake.

What’s new in America is the theatre of it. Television does not create weather any more than it creates contemporary politics. However, the ritual ceremonies of televised weather have endowed a subject often previously banal with an amazing life as mass entertainment, nationwide interactive preoccupation and a kind of immense performance art.

What we have is weather as electronic American Shintoism, a casual but almost mystic daily religion wherein nature is not inert but restless, stirring alive with kinetic fronts and meanings and turbulent expectations (forecasts, variables, prophecies). We have installed an elaborate priesthood and technology of interpretation: acolytes and satellites preside over snow and circuses. At least major snowstorms have about them an innocence and moral neutrality that is more refreshing than the last national television spectacle, the O.J. Simpson trial.

One attraction is the fact that these large gestures of nature are apolitical. The weather in its mirabilis mode can, of course, be dragged into the op-ed page to start a macro-argument about global warming or a micro-spat over a mayor’s fecklessness in deploying snowplows. Otherwise, traumas of weather do not admit of political interpretation. The snow Shinto reintroduces an element of what is almost charmingly uncontrollable in life. And, as shown last week, surprising, even as the priests predict it. This is welcome - a kind of ideological relief - in a rather stupidly politicised society living under the delusion that everything in life (and death) is arguable, political and therefore manipulable - from diet to DNA. None of the old earthbound Marxist who-whom here in meteorology, but rather sky gods that bang around at higher altitudes and leave the earth in its misery, to submit to the sloppy collateral damage.

The moral indifference of weather, even when destructive, is somehow stimulating. Why? The sheer levelling force is pleasing. It overrides routine and organises people into a shared moment that will become a punctuating memory in their lives (“Lord, remember the blizzard in ’96?”).

Or perhaps one’s reaction is no more complicated than a child’s delight in dramatic disruption. Everyone loves to stand on the beach with a hurricane coming - a darkly lashing Byronism in surf and wind gets the blood up. The god’s, or child’s, part of the mind welcomes big weather - floods and blizzards. The coping, grown-up, human part curses it, and sinks.

The paradox of big weather: it makes people feel important even while it dramatises their insignificance. In some ways, extreme weather is a brief moral equivalent of war - as stimulating as war can sometimes be, though without most of the carnage.

The sun rises upon diamond-scattered snow-fields and glistens upon the lucent dragon’s teeth. In the distance, three deer, roused from their shelter under pines, venture forth. They struggle and plunge undulantly through the opulent white.

Upstairs, I switch on the Shinto Weather Channel and the priests at the map show me the next wave - white swirls and eddies over Indiana, heading ominously east.

CAT/1995(RC)

Question. 151

The author’s reaction to the snowstorm may be said to be

CAT/1995(RC)

Question. 152

How many vehicles does the author mention in the passage?

CAT/1995(RC)

Question. 153

Which of the following was not a result of the ‘Winter of the Blue Snow’?

CAT/1995(RC)

Question. 154

The author compares the weather bulletin channel reportage to

CAT/1995(RC)

Question. 155

According to the author, one of the greatest attractions of the weather is that

CAT/1995(RC)

Question. 156

The moral indifference of the weather is stimulating in spite of being destructive because

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

Icicles- two meters long and, at their tips, as bright and sharp as needless- hang from the eaves: wild ice stalactites, dragon’s teeth. I peer through them to see the world transformed to abstract whiteout. Little dervish snow tornadoes twirl across the blank. The car is out there somewhere, represented by a subtle bump in the snow-field. The old jeep truck, a larger beast, is up to its door handles, like a sinking remnant: dinosaur yelding to ice age. The town’s behemoth snow-plow passes on the road, dome light twirling, and casts aside a frozen doe that now lies, neck broken, upon the roadisde snow-bank, soon to vanish under the snowfall still to come.

There is double-jointed consiciousness at work in the dramatics of big weather. Down in the snowstorm, we are as mortal as the deer. I sink to my waist in a drift, I panic, my arms claw for an instant, like a drowning swimmer’s in the powder. Men up and down the storm collapse with cornoaries, snow shovels in their hands, gone a deathly colour, like frost-bitten plums.

Yet when we go upstairs to consult the Weather Channel, we settle down, as cosy gods do, to hover high above the earth and watch the play with a divine perspective. Moist air labelled L for low rides up the continent from the Gulf of Mexico and collides with the high that has slid down from the North Pole. And thus is whipped up the egg-white fluff on the studio map that, down in the frozen, messy world, buries mortals.

An odd new metaphysics of weather: It is not that weather has necessarily grown more apocalyptic. The famous “Winter of the Blue Snow” of 1886-87 turned rivers of the American West into glaciers that when they thawed, carried along inundation of dead cattle. President Theodore Roosevelt was virtually ruined as a rancher by the weather that destroyed 65% of his herd. In 1811 Mississippi River flowed northward briefly because of the New Madrid earthquake.

What’s new in America is the theatre of it. Television does not create weather any more than it creates contemporary politics. However, the ritual ceremonies of televised weather have endowed a subject often previously banal with an amazing life as mass entertainment, nationwide interactive precoccupation and a kind of immense performance art.

What we have is weather as electronic American Shintosim , a casual but almost mystic daily religion wherein nature is not inert but restless, stirring alive with kinetic fronts and meanings and turbulent expectations (forecasts, variables, prophecies). We have installed an elaborate priesthood and technology of interpretation: acolytes and satellites preside over snow and circuses. At least major snowstorms have about them an innocence and moral neutrality that is more refreshing than the last national television spectacle, the O.J. Simpson trial.

One attraction is the fact that these large gestures of nature are apolitical. The weather in its mirabilis mode can, of course, be dragged into the op-ed page to start a macro-argument about global warning or amicro-spat over a mayor’s fecklessness in deploying snowplows. Otherwise, traumas of weather do not admit of political interpretation. The snow Shinto reintroduces an element of what is almost charmingly uncontrollable in life. And, as shown last week, surprising , even as the priests predict it. This is welcome- a kind of idological relief-in a rather stupidly politicised society living under the delusion that everything in life (and death) is arguable, politicised society living under the delusion that everything in life (and death) is arguable, political and therefore manipulable - from diet to DNA . None of the old earthbound Marxist who-Whom here in meteorology, but rather sky gods that bang around at higher altitudes and leave the earth in its misery, to submit to the sloppy collateral damage.

The moral indifference of weather, even when destructive, is somehow stimulating. Why? The sheer levelling force is pleasing. It overrides routine and organises people into a shared moment that will become a punctuating memory in their lives (“Lord, remember the blizzard in ‘96?”).

Or perhaps one’s reaction is no more complicated than a child’s delight in dramatic disruption. Everyone loves to stand on the beach with a hurricane coming; a darkly lashing Byronism in surf and wind gets the blood up. The god’s, or child’s, part of the mind welcomes big weather, floods and blizzards. The coping, grown-up human part curses it, and sinks.

The paradox of big weather: it makes people feel important even while it dramatises their insignificance. In some ways, extreme weather is a brief moral equivalent of war, as stimulating as war can sometimes be, though without most of the carnage.

The sun rises upon diamond-scattered snow-fields and glistens upon the lucent dragon’s teeth. In the distance, three deer, roused from their shelter under pines, venture forth. They struggle and plunge undulously through the opulent white.

Upstairs, I switch on the Shinto Weather Channel, and the priests at the map show me the next wave: white swirls and eddies over Indiana, heading ominously east.

CAT/1995(RC)

Question. 157

Which of the following is not true of the weather?

Question. 158

What is most probably the physical position of the author of the passage?

Question. 159

The word ‘undulously’ in the context of the passage means

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

The communities of ants are sometimes very large, numbering even up to 500,000 individuals; and it is a lesson to us that no one has ever yet seen a quarrel between any two ants belonging to the same community. On the other hand, it must be admitted that they are in hostility not only with most other insects, including ants of different species, but even with those of the same species if belonging to different communities. I have over and over again introduced ants from one of my nests into another nest of the same species; and they were invariably attacked, seized by a leg or an antenna, and dragged out.

It is evident, therefore, that the ants of each community all recognize one another, which is very remarkable. But more than this, I several times divided a nest into two halves and found that even after separation of a year and nine months they recognized one another and were perfectly friendly, while they at once attacked ants from a different nest, although of the same species.

It has been suggested that the ants of each nest have some sign or password by which they recognize one another. To test this I made some of them insensible. First I tried chloroform; but this was fatal to them, and I did not consider the test satisfactory. I decided therefore to intoxicate them. This was less easy than I had expected. None of my ants would voluntarily degrade themselves by getting drunk. However, I got over the difficulty by putting them into whiskey for a few moments. I took fifty specimens, twenty-five from one nest and twenty-five from another, made them dead drunk, marked each with a spot of paint, and put them on a table close to where other ants from one of the nests were feeding. The table was surrounded as usual with a moat of water to prevent them from straying. The ants which were feeding soon noticed those which I had made drunk. They seemed quite astonished to find their comrades in such a disgraceful condition, and as much at a loss to know what to do with their drunkards as we were. After a while, however, they carried them all away; the strangers they took to the edge of the moat and dropped into the water, while they bore their friends home into the nest, where by degrees they slept off the effects of the spirits. Thus it is evident that they know their friends even when incapable of giving any sign or password.

CAT/1994(RC)

Question. 160

A good title for this passage might be

Question. 161

The attitude of ants toward strangers of the same species may be categorized as

Question. 162

The author’s anecdote of the inebriated ants would support all of the following inductions except the statement that

Question. 163

According to the passage, chloroform was less successful than alcohol for inhibiting communication because of

Question. 164

Although the author is a scientist, his style of writing also exhibits a quality of

Comprehension

Directions for Questions: Read the passage carefully and answer the given questions accordingly.

Compared with other experimental sciences, astronomy has certain limitations. First, apart from meteorites, the Moon, and the nearer planets, the objects of study are inaccessible and cannot be manipulated, although nature sometimes provides special conditions, such as eclipses and other temporary effects. The astronomer must content himself with studying radiation emitted or reflected from celestial bodies.

Second, from the Earth’s surface, these are viewed through a thick atmosphere that completely absorbs most radiation except within certain “windows”, wavelength regions in which the radiation can pass through the atmosphere relatively freely in the optical, near-infrared, and radio bands of the electromagnetic spectrum; and even in these windows, the atmosphere has considerable effects. For light, these atmospheric effects are as follows: (1) some absorption that dims the radiation somewhat, even in a clear sky; (2) refraction, which causes a slight shift in direction so that the object appears in a slightly different place; (3) scintillation (twinkling), i.e., fluctuation in the brightness of effectively point-like sources such as stars, fluctuations that are, however, averaged out for objects with larger images, such as planets (the ionosphere, an ionized layer high in the atmosphere, and the interplanetary medium have similar effects on radio sources); (4) image movement because of atmospheric turbulence (“bad seeing”), which spreads the image of a tiny point over an angle of nearly one arc second or more on the celestial sphere (one arc second equals 1/3,600 of a degree); and (5) background light from the night sky. The obscuring effects of the atmosphere and its clouds are reduced by placing observing stations on mountains, preferably in desert regions (e.g., Southern California and Chile), and away from city lights. The effects are eliminated by observing from high-altitude aircraft, balloons, rockets, space probes, and artificial satellites. From stations outside all or most of the atmosphere, gamma rays and X-rays (that is, high-energy radiation at extremely short wavelengths), as well as far-ultraviolet and far-infrared radiation, all of which are completely absorbed by the atmosphere at ground-level observatories, can be measured. At radio wavelengths between about one centimeter and 20 meters, the atmosphere (even when cloudy) has little effect, and man-made radio signals are the chief interference.

Third, the Earth is a spinning, shifting, and wobbling platform. Spin on its axis causes the alternation of day and night and an apparent rotation of the celestial sphere, with stars moving from east to west. Ground-based telescopes use a mounting that makes it possible to neutralize the rotation of the Earth relative to the stars; with an equatorial mounting driven at the proper speed, the direction of the telescope tube can be kept constant for hours while the Earth turns under the mounting. Large radio telescopes usually have vertical and horizontal axes (altazimuth mounting), with their pointing continuously controlled by a computer.

In addition to the daily spin, there are much more gradual effects, called precession and nutation. Gravitational action of the Sun and Moon on the Earth’s equatorial bulge causes the Earth’s axis to precess like a top or gyroscope, gradually tracing out a circle on the celestial sphere in about 26,000 years, and also to nutate or wobble slightly in a period of 18.6 years. The Earth’s rotation and orbital motion provide the basic standard of directions of stars, so that uncertainties in the rate of these motions can lead to quite small but important uncertainties in measurements of stellar movements.
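
The figures in this passage can be checked with simple arithmetic: an equatorial drive must turn through 360 degrees in one sidereal day, while precession carries the Earth’s axis around a full circle only once in roughly 26,000 years. The short Python sketch below is an illustrative calculation of ours, not part of the passage; the sidereal-day constant is an assumed standard value and the function name is our own. It converts those periods into angular rates expressed in the passage’s own unit, the arc second.

    # Rates implied by the passage: how fast an equatorial mount must be driven,
    # and how slow precession is by comparison. Figures are approximate.
    ARCSEC_PER_DEGREE = 3600        # one arc second = 1/3,600 of a degree, as stated in the passage
    SIDEREAL_DAY_S = 86_164         # seconds per rotation of the Earth relative to the stars (assumed value)

    def full_circle_rate(period_s: float) -> float:
        """Arc seconds per second for a motion covering 360 degrees in period_s seconds."""
        return 360 * ARCSEC_PER_DEGREE / period_s

    print(f"equatorial drive rate : {full_circle_rate(SIDEREAL_DAY_S):.2f} arcsec per second")  # about 15.04
    print(f"precession (26,000 yr): {360 * ARCSEC_PER_DEGREE / 26_000:.1f} arcsec per year")    # about 49.8

At roughly 15 arc seconds of sky per second of time, even a brief lapse in the drive smears a stellar image across many times the one-arc-second “seeing” limit the passage mentions, which is why the mounting must track continuously; precession, at about 50 arc seconds per year, is indeed the far more gradual effect the passage describes.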

CAT/1994(RC)

Question. 165

One of the types of radiation that cannot pass through the atmospheric “windows” without distortion is

Question. 166

One of the atmospheric effects on earth-based observations that is not mentioned in the passage is

Question. 167

The purpose of the telescope mounting is to neutralize

Question. 168

The precession period of Earth is

Question. 169

Gravitational action of the Sun and the Moon on Earth causes

I. diurnal spinning

II. precession

III. nutation

Question. 170

The orbital motion of the Earth

Question. 171

Man-made radio signals have wavelengths of