
Science Globals

Denying the limits of Nature
Vaccination, Antibiotics and Sanitation
Medical and scientific evidence surrounding vaccination demonstrates that the benefits of preventing suffering and death from infectious diseases outweigh adverse effects of immunization.[1] Despite this, vaccine controversies began almost 80 years before the terms vaccine and vaccination were introduced, and they continue to this day. Opponents have questioned the effectiveness, safety, and necessity of all recommended vaccines. It is also argued that mandatory vaccinations violate individual rights to medical decisions and religious principles.[2] These arguments have reduced vaccination rates in certain communities, resulting in outbreaks of preventable and fatal childhood illnesses.[3][4][5]
The success of immunization programs depends on public confidence in their safety. Concerns about immunization safety often follow a pattern: some investigators suggest that a medical condition is an adverse effect of vaccination; a premature announcement is made of the alleged adverse effect; the initial study is not reproduced by other groups; and finally, it takes several years to regain public confidence in the vaccine.[1] The most recent and notable example of this pattern involved Andrew Wakefield's discredited claims that the MMR vaccine causes autism.
Public reaction to vaccine controversies has contributed to a significant increase in preventable diseases, including measles.[6]
Early attempts to prevent smallpox involved deliberate inoculation of the disease in hopes that a mild result would confer immunity. Originally called inoculation, this technique was later called variolation to avoid confusion with cowpox inoculation (vaccination) when that was introduced by Edward Jenner. Although variolation had a long history in China and India, it was first used in North America and England in 1721. Reverend Cotton Mather introduced variolation to Boston, Massachusetts, during the 1721 smallpox epidemic.[7] Many had religious objections, but Mather convinced Dr. Zabdiel Boylston to try it. Boylston first experimented on his 6-year-old son, his slave, and his slave's son; each subject contracted the disease and was sick for several days, until the sickness vanished and they were "no longer gravely ill".[7] Boylston went on to variolate thousands of Massachusetts residents, and many places were named for him in gratitude as a result. Lady Mary Wortley Montagu introduced variolation to England. She had seen it used in Turkey and, in 1718, had her son successfully variolated in Constantinople under the supervision of Dr. Charles Maitland. When she returned to England in 1721, she had her daughter variolated by Maitland. This aroused considerable interest, and Sir Hans Sloane organized the variolation of some inmates in Newgate Prison. These trials were successful, and after a further short trial in 1722, two daughters of Caroline of Ansbach, Princess of Wales, were variolated without mishap. With this royal approval, the procedure became common when smallpox epidemics threatened.[8]
Further, identification methods for potential pathogens were not available until the late 19th to early 20th century. Diseases later shown to be caused by contaminated vaccine included erysipelas, tuberculosis, tetanus, and syphilis. This last, though rare—estimated at 750 cases in 100 million vaccinations[13]—attracted particular attention. Much later, Dr. Charles Creighton, a leading medical opponent of vaccination, claimed that the vaccine itself was a cause of syphilis and devoted a whole book to the subject.[14] As cases of smallpox started to occur in those who had been vaccinated earlier, supporters of vaccination pointed out that these were usually very mild and occurred years after the vaccination.
Opposition to smallpox vaccination continued into the 20th century and was joined by controversy over new vaccines and the introduction of antitoxin treatment for diphtheria. Injection of horse serum into humans as used in antitoxin can cause hypersensitivity, commonly referred to as serum sickness. Moreover, the continued production of smallpox vaccine in animals and the production of antitoxins in horses prompted anti-vivisectionists to oppose vaccination.
Diphtheria antitoxin was serum from horses that had been immunized against diphtheria, and was used to treat human cases by providing passive immunity. In 1901, antitoxin from a horse named Jim was contaminated with tetanus and killed 13 children in St Louis, Missouri. This incident, together with nine deaths from tetanus from contaminated smallpox vaccine in Camden, New Jersey, led directly and quickly to the passing of the Biologics Control Act in 1902.[33]
Robert Koch developed tuberculin in 1890. Inoculated into individuals who have had tuberculosis, it produces a hypersensitivity reaction, and is still used to detect those who have been infected. However, Koch used tuberculin as a vaccine. This caused serious reactions and deaths in individuals whose latent tuberculosis was reactivated by the tuberculin.[34] This was a major setback for supporters of new vaccines.[12]:30–31 Such incidents and others ensured that any untoward results concerning vaccination and related procedures received continued publicity, which grew as the number of new procedures increased.
Few deny the vast improvements vaccination has made to public health; a more common concern is their safety.[67] As with any medical treatment, there is a potential for vaccines to cause serious complications, such as severe allergic reactions,[68] but unlike most other medical interventions, vaccines are given to healthy people and so a higher standard of safety is expected.[69] While serious complications from vaccinations are possible, they are extremely rare and much less common than similar risks from the diseases they prevent.[42] As the success of immunization programs increases and the incidence of disease decreases, public attention shifts away from the risks of disease to the risk of vaccination,[1] and it becomes challenging for health authorities to preserve public support for vaccination programs.[70]
Concerns about immunization safety often follow a pattern. First, some investigators suggest that a medical condition of increasing prevalence or unknown cause is an adverse effect of vaccination. The initial study and subsequent studies by the same group have inadequate methodology, typically a poorly controlled or uncontrolled case series. A premature announcement is made about the alleged adverse effect; it resonates with individuals suffering from the condition and underestimates the potential harm of foregoing vaccination for those whom the vaccine could protect. Other groups attempt to replicate the initial study but fail to get the same results. Finally, it takes several years to regain public confidence in the vaccine.[1] Adverse effects ascribed to vaccines typically have an unknown origin, an increasing incidence, some biological plausibility, occurrences close to the time of vaccination, and dreaded outcomes.[71] In almost all cases, the public health effect is limited by cultural boundaries: English speakers worry about one vaccine causing autism, while French speakers worry about another vaccine causing multiple sclerosis, and Nigerians worry that a third vaccine causes infertility.[72]

Autism controversies

Despite significant media attention linking the causes of autism to some vaccines, vaccines do not cause autism.[1][73][74] A 2011 journal article described the vaccine-autism connection as "the most damaging medical hoax of the last 100 years".[75]

Thiomersal

Main article: Thiomersal controversy
Thiomersal is a preservative that some American parents believed caused autism. In 1999, the Centers for Disease Control (CDC) and the American Academy of Pediatrics (AAP) asked vaccine makers to remove the organomercury compound thiomersal (spelled "thimerosal" in the US) from vaccines as quickly as possible, and thiomersal has been phased out of US and European vaccines, except for some preparations of influenza vaccine.[76] The CDC and the AAP followed the precautionary principle, which assumes that there is no harm in exercising caution even if it later turns out to be unwarranted, but their 1999 action sparked confusion and controversy that has diverted attention and resources away from efforts to determine the causes of autism.[76] Since 2000, the thiomersal in child vaccines has been alleged to contribute to autism, and thousands of parents in the United States have pursued legal compensation from a federal fund.[77] A 2004 Institute of Medicine (IOM) committee favored rejecting any causal relationship between thiomersal-containing vaccines and autism.[78] Autism incidence rates increased steadily even after thiomersal was removed from childhood vaccines.[79] Currently there is no accepted scientific evidence that exposure to thiomersal is a factor in causing autism.[80]

MMR vaccine

Main article: MMR vaccine controversy
In the UK, the MMR vaccine was the subject of controversy after the publication in The Lancet of a 1998 paper by Andrew Wakefield and others reporting a study of 12 children, mostly with autism spectrum disorders, with onset soon after administration of the vaccine.[81] At a 1998 press conference, Wakefield suggested that giving children the vaccines in three separate doses would be safer than a single vaccination. This suggestion was not supported by the paper, and several subsequent peer-reviewed studies have failed to show any association between the vaccine and autism.[82] It later emerged that Wakefield had received funding from litigants against vaccine manufacturers and that he had not informed colleagues or medical authorities of his conflict of interest;[83] had this been known, publication in The Lancet would not have taken place in the way that it did.[84] Wakefield has been heavily criticized on scientific grounds and for triggering a decline in vaccination rates[85] (vaccination rates in the UK dropped to 80% in the years following the study),[63] as well as on ethical grounds for the way the research was conducted.[86] In 2004, the MMR-and-autism interpretation of the paper was formally retracted by 10 of Wakefield's 12 coauthors,[87] and in 2010 The Lancet's editors fully retracted the paper.[88] Wakefield was struck off the UK medical register, with a statement identifying deliberate falsification in the research published in The Lancet,[89] and is barred from practising medicine in the UK.[90]
The CDC,[91] the IOM of the National Academy of Sciences,[78] and the UK National Health Service[92] have all concluded that there is no evidence of a link between the MMR vaccine and autism. A systematic review by the Cochrane Library concluded that there is no credible link between the MMR vaccine and autism, that MMR has prevented diseases that still carry a heavy burden of death and complications, that the lack of confidence in MMR has damaged public health, and that the design and reporting of safety outcomes in MMR vaccine studies are largely inadequate.[93]
In 2009, The Sunday Times reported that Wakefield had manipulated patient data and misreported results in his 1998 paper, creating the appearance of a link with autism.[94] A 2011 article in the British Medical Journal described how the data in the study had been falsified by Wakefield so that it would arrive at a predetermined conclusion.[95] An accompanying editorial in the same journal described Wakefield's work as an "elaborate fraud" that led to lower vaccination rates, putting hundreds of thousands of children at risk and diverting energy and money away from research into the true cause of autism.[96]
A special court convened in the United States to review claims under the National Vaccine Injury Compensation Program ruled on 12 February 2009 that parents of autistic children are not entitled to compensation in their contention that certain vaccines caused autism in their children.[97]

Vaccine overload

Vaccine overload is the notion that giving many vaccines at once may overwhelm or weaken a child's immature immune system and lead to adverse effects.[98] Despite scientific evidence that strongly contradicts this idea,[79] some parents of autistic children believe that vaccine overload causes autism.[99] The resulting controversy has caused many parents to delay or avoid immunizing their children.[98] Such parental misperceptions are major obstacles towards immunization of children.[100]
The concept of vaccine overload is flawed on several levels.[79] Despite the increase in the number of vaccines over recent decades, improvements in vaccine design have reduced the immunologic load from vaccines; the total number of immunological components in the 14 vaccines administered to US children in 2009 is less than 10% of what it was in the 7 vaccines given in 1980.[79] A study published in 2013 found no correlation between autism and the antigen number in the vaccines the children were administered up to the age of two. Of the 1,008 children in the study, one quarter of those diagnosed with autism were born between 1994 and 1999, when the routine vaccine schedule could contain more than 3,000 antigens (in a single shot of DTP vaccine). The vaccine schedule in 2012 contains several more vaccines, but the number of antigens the child is exposed to by the age of two is 315.[101][102] Vaccines pose a minuscule immunologic load compared to the pathogens naturally encountered by a child in a typical year;[79] common childhood conditions such as fevers and middle-ear infections pose a much greater challenge to the immune system than vaccines,[103] and studies have shown that vaccinations, even multiple concurrent vaccinations, do not weaken the immune system[79] or compromise overall immunity.[104] The lack of evidence supporting the vaccine overload hypothesis, combined with these findings directly contradicting it, has led to the conclusion that currently recommended vaccine programs do not "overload" or weaken the immune system.[1][105][106]
Any experiment based on withholding vaccines from children has been considered unethical,[107] and observational studies would likely be confounded by differences in the health care–seeking behaviours of under-vaccinated children. Thus, no study directly comparing rates of autism in vaccinated and unvaccinated children has been done. However, the concept of vaccine overload is biologically implausible, vaccinated and unvaccinated children have the same immune response to non-vaccine-related infections, and autism is not an immune-mediated disease, so claims that vaccines could cause it by overloading the immune system go against current knowledge of the pathogenesis of autism. As such, the idea that vaccines cause autism has been effectively dismissed by the weight of current evidence.[79]

Prenatal infection

There is evidence that schizophrenia is associated with prenatal exposure to rubella, influenza, and toxoplasmosis infection. For example, one study found a sevenfold increased risk of schizophrenia when mothers were exposed to influenza in the first trimester of gestation. This may have public health implications, as strategies for preventing infection include vaccination, antibiotics, and simple hygiene.[108] Based on studies in animal models, theoretical concerns have been raised about a possible link between schizophrenia and maternal immune response activated by virus antigens; a 2009 review concluded that there was insufficient evidence to recommend routine use of trivalent influenza vaccine during the first trimester of pregnancy, but that the vaccine was still recommended outside the first trimester and in special circumstances such as pandemics or in women with certain other conditions.[109] The CDC's Advisory Committee on Immunization Practices, the American College of Obstetricians and Gynecologists, and the American Academy of Family Physicians all recommend routine flu shots for pregnant women, for several reasons:[110]
  • their risk for serious influenza-related medical complications during the last two trimesters;
  • their greater rates for flu-related hospitalizations compared to non-pregnant women;
  • the possible transfer of maternal anti-influenza antibodies to children, protecting the children from the flu; and
  • several studies that found no harm to pregnant women or their children from the vaccinations.
Despite this recommendation, only 16% of healthy pregnant US women surveyed in 2005 had been vaccinated against the flu.[110]

Aluminium

Aluminium compounds are used as immunologic adjuvants to increase the effectiveness of many vaccines.[111] In some cases these compounds have been associated with redness, itching, and low-grade fever,[111] but the use of aluminium in vaccines has not been associated with serious adverse events.[112] In some cases, aluminium-containing vaccines are associated with macrophagic myofasciitis (MMF), localized microscopic lesions containing aluminium salts that persist for up to 8 years. However, recent case-controlled studies have found no specific clinical symptoms in individuals with biopsies showing MMF, and there is no evidence that aluminium-containing vaccines are a serious health risk or justify changes to immunization practice.[112] Over the first six months of its life, an infant ingests more aluminium from dietary sources such as breast milk and infant formula than it does from vaccinations.[113][114]

Other safety concerns

Other safety concerns about vaccines have been published on the Internet, in informal meetings, in books, and at symposia. These include hypotheses that vaccination can cause sudden infant death syndrome, epileptic seizures, allergies, multiple sclerosis, and autoimmune diseases such as type 1 diabetes, as well as hypotheses that vaccinations can transmit bovine spongiform encephalopathy, Hepatitis C virus, and HIV. These hypotheses have been investigated, with the conclusion that currently used vaccines meet high safety standards and that criticism of vaccine safety in the popular press is not justified.
Transforming Body And Mind
Transhumanism (abbreviated as H+ or h+) is an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities.[1] Transhumanist thinkers study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations, as well as the ethics of developing and using such technologies.[2] The most common thesis put forward is that human beings may eventually be able to transform themselves into beings with such greatly expanded abilities as to merit the label posthuman.[1]
The contemporary meaning of the term transhumanism was foreshadowed by one of the first professors of futurology, FM-2030, who taught "new concepts of the human" at The New School in the 1960s, when he began to identify people who adopt technologies, lifestyles and worldviews "transitional" to posthumanity as "transhuman".[3] This hypothesis would lay the intellectual groundwork for the British philosopher Max More to begin articulating the principles of transhumanism as a futurist philosophy in 1990 and organizing in California an intelligentsia that has since grown into the worldwide transhumanist movement.[3][4][5]
Influenced by seminal works of science fiction, the transhumanist vision of a transformed future humanity has attracted many supporters and detractors from a wide range of perspectives.[3] Transhumanism has been characterized by one critic, Francis Fukuyama, as among the world's most dangerous ideas,[6] to which Ronald Bailey countered that it is rather the "movement that epitomizes the most daring, courageous, imaginative and idealistic aspirations of humanity".[7]
The concept of the technological singularity, or the ultra-rapid advent of superhuman intelligence, was first proposed by the British cryptologist I. J. Good in 1965:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. [16]
Computer scientist Marvin Minsky wrote on relationships between human and artificial intelligence beginning in the 1960s.[17] Over the succeeding decades, this field continued to generate influential thinkers such as Hans Moravec and Raymond Kurzweil, who oscillated between the technical arena and futuristic speculations in the transhumanist vein.[18][19] The coalescence of an identifiable transhumanist movement began in the last decades of the 20th century. In 1966, FM-2030 (formerly F. M. Esfandiary), a futurist who taught "new concepts of the human" at The New School, in New York City, began to identify people who adopt technologies, lifestyles and world views transitional to posthumanity as "transhuman".[20] In 1972, Robert Ettinger contributed to the conceptualization of "transhumanity" in his book Man into Superman.[21][22] FM-2030 published the Upwingers Manifesto in 1973.
The first self-described transhumanists met formally in the early 1980s at the University of California, Los Angeles, which became the main center of transhumanist thought. Here, FM-2030 lectured on his "Third Way" futurist ideology. At the EZTV Media venue, frequented by transhumanists and other futurists, Natasha Vita-More presented Breaking Away, her 1980 experimental film with the theme of humans breaking away from their biological limitations and the Earth's gravity as they head into space.[24][25] FM-2030 and Vita-More soon began holding gatherings for transhumanists in Los Angeles, which included students from FM-2030's courses and audiences from Vita-More's artistic productions. In 1982, Vita-More authored the Transhumanist Arts Statement[26] and, six years later, produced the cable TV show TransCentury Update on transhumanity, a program which reached over 100,000 viewers.
In 1986, Eric Drexler published Engines of Creation: The Coming Era of Nanotechnology,[27] which discussed the prospects for nanotechnology and molecular assemblers, and founded the Foresight Institute. As the first non-profit organization to research, advocate for, and perform cryonics, the Southern California offices of the Alcor Life Extension Foundation became a center for futurists. In 1988, the first issue of Extropy Magazine was published by Max More and Tom Morrow. In 1990, More, a strategic philosopher, created his own particular transhumanist doctrine, which took the form of the Principles of Extropy,[28] and laid the foundation of modern transhumanism by giving it a new definition:[29]
Transhumanism is a class of philosophies that seek to guide us towards a posthuman condition. Transhumanism shares many elements of humanism, including a respect for reason and science, a commitment to progress, and a valuing of human (or transhuman) existence in this life. [...] Transhumanism differs from humanism in recognizing and anticipating the radical alterations in the nature and possibilities of our lives resulting from various sciences and technologies [...].
In 1992, More and Morrow founded the Extropy Institute, a catalyst for networking futurists and brainstorming new memeplexes by organizing a series of conferences and, more importantly, providing a mailing list, which exposed many to transhumanist views for the first time during the rise of cyberculture and the cyberdelic counterculture. In 1998, philosophers Nick Bostrom and David Pearce founded the World Transhumanist Association (WTA), an international non-governmental organization working toward the recognition of transhumanism as a legitimate subject of scientific inquiry and public policy.[30] In 2002, the WTA modified and adopted The Transhumanist Declaration.[31] The Transhumanist FAQ, prepared by the WTA, gave two formal definitions for transhumanism:[32]
  1. The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.
  2. The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.
A number of similar definitions have been collected by Anders Sandberg, an academic and prominent transhumanist.[33]
In possible contrast with other transhumanist organizations, WTA officials considered that social forces could undermine their futurist visions and needed to be addressed.[3] A particular concern is equal access to human enhancement technologies across classes and borders.[34] In 2006, a political struggle within the transhumanist movement between the libertarian right and the liberal left resulted in a more centre-leftward positioning of the WTA under its former executive director James Hughes.[34][35] In 2006, the board of directors of the Extropy Institute ceased operations of the organization, stating that its mission was "essentially completed".[36] This left the World Transhumanist Association as the leading international transhumanist organization. In 2008, as part of a rebranding effort, the WTA changed its name to "Humanity+".[37] Humanity+ and Betterhumans publish h+ Magazine, a periodical edited by R. U. Sirius which disseminates transhumanist news and ideas.[38][39] In 2012, the transhumanist Longevity Party was initiated as an international union of people who promote the development of scientific and technological means to significant life extension, and it now has more than 30 national organisations throughout the world.[40][41]
Transhumanist-themed blogs by Zoltan Istvan appear in mainstream media on Psychology Today, Vice's Motherboard, and The Huffington Post.[42][43][44] Istvan is the founder of the Transhumanist Party and is its 2016 US Presidential candidate.[45][46][47][48][49][50]
The first transhumanist elected member of a Parliament is Giuseppe Vatinno, in Italy.[51] In 2015, Vatinno became a member of the Board of Directors of Humanity+.[52]
It is a matter of debate whether transhumanism is a branch of posthumanism and how this philosophical movement should be conceptualised with regard to transhumanism. The latter is often referred to as a variant or activist form of posthumanism by its conservative,[6] Christian[53] and progressive[54][55] critics. Nevertheless, the idea of creating intelligent artificial beings (proposed, for example, by roboticist Hans Moravec) has influenced transhumanism.[18] Moravec's ideas and transhumanism have also been characterised as a "complacent" or "apocalyptic" variant of posthumanism and contrasted with "cultural posthumanism" in humanities and the arts.[57] While such a "cultural posthumanism" would offer resources for rethinking the relationships between humans and increasingly sophisticated machines, transhumanism and similar posthumanisms are, in this view, not abandoning obsolete concepts of the "autonomous liberal subject", but are expanding its "prerogatives" into the realm of the posthuman.[58] Transhumanist self-characterisations as a continuation of humanism and Enlightenment thinking correspond with this view. While many transhumanist theorists and advocates seek to apply reason, science and technology for the purposes of reducing poverty, disease, disability and malnutrition around the globe,[32] transhumanism is distinctive in its particular focus on the applications of technologies to the improvement of human bodies at the individual level. Many transhumanists actively assess the potential for future technologies and innovative social systems to improve the quality of all life, while seeking to make the material reality of the human condition fulfill the promise of legal and political equality by eliminating congenital mental and physical barriers.
Transhumanist philosophers argue that there not only exists a perfectionist ethical imperative for humans to strive for progress and improvement of the human condition, but that it is possible and desirable for humanity to enter a transhuman phase of existence in which humans are in control of their own evolution. In such a phase, natural evolution would be replaced with deliberate change.
Some theorists such as Raymond Kurzweil think that the pace of technological innovation is accelerating and that the next 50 years may yield not only radical technological advances, but possibly a technological singularity, which may fundamentally change the nature of human beings.[61] Transhumanists who foresee this massive technological change generally maintain that it is desirable. However, some are also concerned with the possible dangers of extremely rapid technological change and propose options for ensuring that advanced technology is used responsibly. For example, Bostrom has written extensively on existential risks to humanity's future welfare, including ones that could be created by emerging technologies.
Transhumanists support the emergence and convergence of technologies including nanotechnology, biotechnology, information technology and cognitive science (NBIC), as well as hypothetical future technologies like simulated reality, artificial intelligence, superintelligence, mind uploading, chemical brain preservation and cryonics. They believe that humans can and should use these technologies to become more than human.[91] Therefore, they support the recognition and/or protection of cognitive liberty, morphological freedom and procreative liberty as civil liberties, so as to guarantee individuals the choice of using human enhancement technologies on themselves and their children.[92] Some speculate that human enhancement techniques and other emerging technologies may facilitate more radical human enhancement no later than the midpoint of the 21st century. Kurzweil's book The Singularity is Near and Michio Kaku's book Physics of the Future outline various human enhancement technologies and give insight on how these technologies may impact the human race.
Prosthesis and new techniques for defeating Paralysis
To help people suffering paralysis from injury, stroke or disease, scientists have invented brain-machine interfaces that record electrical signals of neurons in the brain and translate them to movement. Usually, that means the neural signals direct a device, like a robotic arm.
Cornell University researcher Maryam Shanechi, assistant professor of electrical and computer engineering, working with Ziv Williams, assistant professor of neurosurgery at Harvard Medical School, is bringing brain-machine interfaces to the next level: Instead of signals directing a device, she hopes to help paralyzed people move their own limb, just by thinking about it.
When paralyzed patients imagine or plan a movement, neurons in the brain's motor cortical areas still activate even though the communication link between the brain and muscles is broken. By implanting sensors in these brain areas, neural activity can be recorded and translated to the patient's desired movement using a mathematical transform called the decoder. These interfaces allow patients to generate movements directly with their thoughts.
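As a rough illustration of what a decoder of this kind does, neural firing rates can be mapped to an intended movement by a fitted linear transform. This is only a sketch of the general idea, not the real-time statistical decoders used in the study; the neuron counts, firing rates, and least-squares fit below are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 500 time bins of firing rates from 20 neurons,
# each paired with the 2-D movement intention observed at that time.
n_bins, n_neurons = 500, 20
true_weights = rng.normal(size=(n_neurons, 2))
firing_rates = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)
intended_movement = firing_rates @ true_weights + rng.normal(scale=0.5, size=(n_bins, 2))

# Fit the decoder by least squares: intended_movement ~ firing_rates @ W
W, *_ = np.linalg.lstsq(firing_rates, intended_movement, rcond=None)

# Decode a new bin of neural activity into a 2-D movement command.
new_rates = rng.poisson(lam=5.0, size=n_neurons).astype(float)
decoded_xy = new_rates @ W
print(decoded_xy.shape)  # (2,)
```

In practice such decoders operate on streaming data and use more sophisticated state-space models, but the core step, a learned transform from recorded activity to intended movement, is the same.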
In a paper published online Feb. 18 in Nature Communications, Shanechi, Williams and colleagues describe a cortical-spinal prosthesis that directs "targeted movement" in paralyzed limbs. The research team developed and tested a prosthesis that connects two subjects, enabling one subject's recorded neural activity to control limb movements in a second subject that is temporarily sedated. The demonstration is a step toward brain-machine interfaces that allow paralyzed humans to control their own limbs using their brain activity alone.
The brain-machine interface is based on a set of real-time decoding algorithms that process neural signals to predict the intended movement. In the experiment, one animal acted as the controller of the movement, or the "master": it "decided" which target location to move to and generated the neural activity that was decoded into this intended movement. The decoded movement was used to directly control the limb of the other animal by electrically stimulating its spinal cord.
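As a hedged illustration of what a target decoder of this kind does, the sketch below compares observed firing rates against per-target templates and picks the closest. All templates, rates, and target names are invented for the example; the decoder used in the study is far more sophisticated than a nearest-template classifier.

```python
import numpy as np

# Template firing rates (spikes/s) for two candidate targets, one entry
# per recorded neuron. In a real system these would come from
# calibration trials; these numbers are illustrative assumptions.
templates = {
    "left":  np.array([42.0, 10.0, 55.0]),
    "right": np.array([12.0, 48.0, 20.0]),
}

def decode_target(firing_rates):
    """Return the target whose template is closest to the observed rates."""
    return min(templates, key=lambda t: np.linalg.norm(firing_rates - templates[t]))

# A trial whose rates resemble the "left" template decodes as "left".
trial = np.array([40.0, 12.0, 50.0])
print(decode_target(trial))  # left
```

The design choice mirrored here is that the decoder outputs a discrete intended target, which can then be acted on downstream, rather than a continuous limb trajectory.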
"The problem here is not only that of decoding the recorded neural activity into the intended movement, but also that of properly stimulating the spinal cord to move the paralyzed limb according to the decoded movement," Shanechi said.
The scientists focused on decoding the target endpoint of the movement as opposed to its detailed kinematics. This allowed them to match the decoded target with a set of spinal stimulation parameters that generated limb movement toward that target. They demonstrated that the alert animal could produce two-dimensional movement in the sedated animal's limb -- a breakthrough in brain-machine interface research.
"By focusing on the target end point of movement as opposed to its detailed kinematics, we could reduce the complexity of solving for the appropriate spinal stimulation parameters, which helped us achieve this 2-D movement," Williams said.
Part of the experimental setup's novelty was using two different animals, rather than one animal with a temporarily paralyzed limb. That way, the scientists contend, they have a true model of paralysis, since the master animal's brain and the sedated animal's limb had no physiological connection, as is the case for a paralyzed patient.
Shanechi's lab will continue developing more sophisticated brain-machine interface architectures with principled algorithmic designs and use them to construct high-performance prosthetics. These architectures could be used to control an external device or the native limb.
"The next step is to advance the development of brain-machine interface algorithms using the principles of control theory and statistical signal processing," Shanechi said. "Such brain-machine interface architectures could enable patients to generate complex movements using robotic arms or paralyzed limbs."
A prosthesis (from Ancient Greek prósthesis, "addition, application, attachment")[1] is an artificial device that replaces a missing body part, which may be lost through trauma, disease, or congenital conditions. Prosthetic amputee rehabilitation is primarily coordinated by a prosthetist and an inter-disciplinary team of health care professionals including physiatrists, surgeons, physical therapists, and occupational therapists. Upper extremity prostheses are used at varying levels of amputation: forequarter, shoulder disarticulation, transhumeral prosthesis, elbow disarticulation, transradial prosthesis, wrist disarticulation, full hand, partial hand, finger, partial finger.
A transradial prosthesis is an artificial limb that replaces an arm missing below the elbow. Two main types of prosthetics are available. Cable-operated limbs work by attaching a harness and cable around the shoulder opposite the damaged arm. The other form available is the myoelectric arm, which works by sensing, via electrodes, when the muscles in the upper arm move, causing an artificial hand to open or close. In the prosthetic industry a trans-radial prosthetic arm is often referred to as a "BE" or below-elbow prosthesis.
Lower extremity prostheses provide replacements at varying levels of amputation. These include hip disarticulation, transfemoral prosthesis, knee disarticulation, transtibial prosthesis, Syme's amputation, foot, partial foot, and toe. The two main subcategories of lower extremity prosthetic devices are trans-tibial (any amputation transecting the tibia bone or a congenital anomaly resulting in a tibial deficiency) and trans-femoral (any amputation transecting the femur bone or a congenital anomaly resulting in a femoral deficiency).
Current technology allows body-powered arms to weigh around one-half to one-third as much as a myoelectric arm.

Sockets

Current body-powered arms contain sockets that are built from hard epoxy or carbon fiber. These sockets, or "interfaces", can be made more comfortable by lining them with a softer, compressible foam material that provides padding for the bony prominences. A self-suspending or supra-condylar socket design is useful for those with short to mid-range below-elbow absence. Longer limbs may require the use of a locking roll-on type inner liner or more complex harnessing to help augment suspension.

Wrists

Wrist units are either screw-on connectors featuring the UNF 1/2-20 thread (USA) or quick release connector, of which there are different models.

Voluntary opening and voluntary closing

Two types of body powered systems exist, voluntary opening "pull to open" and voluntary closing "pull to close". Virtually all "split hook" prostheses operate with a voluntary opening type system.
More modern "prehensors" called GRIPS utilize voluntary closing systems. The differences are significant. Users of voluntary opening systems rely on elastic bands or springs for gripping force, while users of voluntary closing systems rely on their own body power and energy to create gripping force.
Voluntary closing users can generate prehension forces equivalent to those of the normal hand, up to or exceeding one hundred pounds. Voluntary closing GRIPS require constant tension to grip, like a human hand, and in that property they come closer to matching human hand performance. Voluntary opening split-hook users are limited to the forces their rubber bands or springs can generate, usually below twenty pounds.
The Power of Electric Light
An electric light is a device that produces visible light by the flow of electric current. It is the most common form of artificial lighting and is essential to modern society, providing interior lighting for buildings and exterior light for evening and nighttime activities. Before electric lighting became common in the early 20th century, people used candles, gas lights, oil lamps, and fires. Most electric lighting is powered by centrally generated electric power, but lighting may also be powered by mobile or standby electric generators or battery systems. Battery-powered lights, usually called "flashlights" or "torches", are used for portability and as backups when the main lights fail.
The two main categories of electric lights are incandescent lamps, which produce light by a filament heated white-hot by electric current, and gas-discharge lamps, which produce light by means of an electric arc through a gas. The energy efficiency of electric lighting has increased radically since the first demonstration of arc lamps and the incandescent light bulb of the 19th century. Modern electric light sources come in a profusion of types and sizes adapted to a myriad of applications. The word "lamp" can refer either to a light source or to the appliance that holds the source.
The most efficient source of electric light is the low-pressure sodium lamp. It produces, for all practical purposes, a monochromatic orange/yellow light, which gives a similarly monochromatic perception of any illuminated scene. For this reason, it is generally reserved for outdoor public lighting. Low-pressure sodium lights are favoured for public lighting by astronomers, since the light pollution that they generate can be easily filtered, unlike broadband or continuous spectra.
Incandescent light bulb
Main article: Incandescent light bulb
The modern incandescent lightbulb, with a coiled filament of tungsten, was commercialized in the 1920s; it developed from the carbon-filament lamp introduced in about 1880. As well as bulbs for normal illumination, there is a very wide range of types, including low-voltage, low-power bulbs often used as components in equipment, though these have now been largely displaced by LEDs.
Some countries have moved to ban certain types of filament lamp because they are inefficient at converting electricity to light; Australia, for example, planned to ban standard incandescent light bulbs by 2010, and Sri Lanka has already banned importing filament bulbs because of their high electricity use and low light output. Less than 3% of the input energy is converted into usable light. Nearly all of the input energy ends up as heat that, in warm climates, must then be removed from the building by ventilation or air conditioning, often resulting in more energy consumption. In colder climates where heating and lighting are required during the cold and dark winter months, the heat byproduct has at least some value.
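The "less than 3%" figure can be checked with rough arithmetic. Assuming a typical 60 W bulb emitting about 800 lumens (an illustrative figure, not a measured one), and using 683 lm/W, the maximum possible luminous efficacy (monochromatic 555 nm light), as an upper bound:

```python
# Back-of-envelope luminous efficacy of an incandescent bulb.
power_w = 60.0        # electrical input (assumed typical bulb)
output_lm = 800.0     # typical light output for a 60 W bulb (assumed)

efficacy = output_lm / power_w          # lumens per watt
# Fraction of input power emerging as visible light, bounded using the
# theoretical maximum efficacy of 683 lm/W:
fraction_usable = efficacy / 683.0
print(round(efficacy, 1), f"{fraction_usable:.1%}")  # 13.3 2.0%
```

The result, around 2%, is consistent with the sub-3% claim; the rest of the input power leaves the bulb mainly as heat and infrared radiation.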
Halogen lamp
Main article: Halogen lamp
Halogen lamps are usually much smaller than standard incandescents, because a bulb temperature over 200 °C is generally necessary for successful operation. For this reason, most have a bulb of fused silica (quartz), but sometimes aluminosilicate glass. This is often sealed inside an additional layer of glass. The outer glass is a safety precaution, reducing UV emission and containing fragments if the halogen bulb explodes during operation, as can occasionally happen; one cause is oily residue from fingerprints on the quartz bulb. The risk of burns or fire is also greater with bare bulbs, leading to their prohibition in some places unless enclosed by the luminaire.
Those designed for 12 V or 24 V operation have compact filaments, useful for good optical control; they also have higher efficacies (lumens per watt) and longer lives than non-halogen types. The light output remains almost constant throughout life.
Fluorescent lamp
Main article: Fluorescent lamp
Fluorescent lamps consist of a glass tube that contains mercury vapour or argon under low pressure. Electricity flowing through the tube causes the gases to give off ultraviolet energy. The inside of the tube is coated with phosphors that give off visible light when struck by ultraviolet energy.[1] Fluorescent lamps have much higher efficiency than incandescent lamps: for the same amount of light generated, they typically use around one-quarter to one-third the power of an incandescent.
LED lamp
Main article: Solid-state lighting
Solid state LEDs have been popular as indicator lights since the 1970s. In recent years, efficacy and output have risen to the point where LEDs are now being used in niche lighting applications.
Indicator LEDs are known for their extremely long life, up to 100,000 hours, but lighting LEDs are operated much less conservatively (due to high LED cost per watt), and consequently have much shorter lives.
Due to the relatively high cost per watt, LED lighting is most useful at very low powers, typically for lamp assemblies of under 10 W. LEDs are currently most useful and cost-effective in low power applications, such as nightlights and flashlights. Colored LEDs can also be used for accent lighting, such as for glass objects, and even in fake ice cubes for drinks at parties. They are also being increasingly used as holiday lighting.
LED efficiencies vary over a very wide range. Some have lower efficiency than filament lamps, and some significantly higher. LED performance in this respect is prone to being misinterpreted, as the inherent directionality of LEDs gives them a much higher light intensity in one direction per given total light output.
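The intensity-versus-output distinction can be made concrete with photometric arithmetic: intensity (candela) is luminous flux per solid angle, so the same total output looks far more intense from a narrow-beam LED than from an omnidirectional bulb. The flux and beam angle below are illustrative assumptions.

```python
import math

flux_lm = 100.0  # assumed total luminous flux, identical for both sources

def solid_angle(cone_half_angle_deg):
    """Solid angle (steradians) of a cone with the given half-angle."""
    return 2 * math.pi * (1 - math.cos(math.radians(cone_half_angle_deg)))

omega_led = solid_angle(15)   # narrow 30-degree beam
omega_bulb = 4 * math.pi      # omnidirectional radiator

print(round(flux_lm / omega_led))   # ~467 cd in the beam direction
print(round(flux_lm / omega_bulb))  # ~8 cd in every direction
```

Both sources emit the same 100 lm, yet the directional LED is roughly sixty times more intense on-axis, which is exactly the effect that can make LED performance figures misleading.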
Single color LEDs are well developed technology, but white LEDs at time of writing still have some unresolved issues.
  1. Color rendering index (CRI) is not particularly good, resulting in less than accurate color rendition.
  2. The light distribution from the phosphor does not fully match the distribution of light from the LED die, so color temperature varies at differing angles.
  3. Phosphor performance degrades over time, resulting in change of color temperature and falling output. With some LEDs degradation can be quite fast.
  4. Limited heat tolerance means that the amount of power packable into a lamp assembly is a fraction of the power usable in a similarly sized incandescent lamp.
LED technology is useful for lighting designers because of its low power consumption, low heat generation, instantaneous on/off control, and in the case of single color LEDs, continuity of color throughout the life of the diode and relatively low cost of manufacture.
In the last few years, software has been developed to merge lighting and video by enabling lighting designers to stream video content to their LED fixtures, creating low resolution video walls.
For general domestic lighting, total cost of ownership of LED lighting is still much higher than for other well-established lighting types.
Carbon arc lamp
Main article: Arc lamp
Carbon arc lamps consist of two carbon rod electrodes in open air, supplied by a current-limiting ballast. The electric arc is struck by touching the rods then separating them. The ensuing arc heats the carbon tips to white heat. These lamps have higher efficiency than filament lamps, but the carbon rods are short lived and require constant adjustment in use. The lamps produce significant ultra-violet output, they require ventilation when used indoors, and due to their intensity they need protecting from direct sight.
Invented by Humphry Davy around 1805, the carbon arc was the first practical electric light. Carbon arc lamps were used commercially beginning in the 1870s for large-building and street lighting until they were superseded in the early 20th century by the incandescent light. They operate at high powers, produce high-intensity white light, and are a point source of light. They remained in use in limited applications that required these properties, such as movie projectors, stage lighting, and searchlights, until after World War II.
Discharge lamp
A discharge lamp has a glass or silica envelope containing two metal electrodes separated by a gas. Gases used include neon, argon, xenon, sodium, metal halide, and mercury.
The core operating principle is much the same as the carbon arc lamp, but the term 'arc lamp' is normally used to refer to carbon arc lamps, with more modern types of gas discharge lamp normally called discharge lamps.
With some discharge lamps, very high voltage is used to strike the arc. This requires an electrical circuit called an igniter, which is part of the ballast circuitry. After the arc is struck, the internal resistance of the lamp drops to a low level, and the ballast limits the current to the operating current. Without a ballast, excess current would flow, causing rapid destruction of the lamp.
Some lamp types contain a little neon, which permits striking at normal running voltage, with no external ignition circuitry. Low pressure sodium lamps operate this way.
The simplest ballasts are just an inductor, and are chosen where cost is the deciding factor, such as street lighting. More advanced electronic ballasts may be designed to maintain constant light output over the life of the lamp, may drive the lamp with a square wave to maintain completely flicker-free output, and shut down in the event of certain faults.
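The sizing of a simple inductive ballast can be sketched with basic AC arithmetic. All electrical values below (supply voltage, arc voltage, operating current) are illustrative assumptions, not figures from any real lamp datasheet.

```python
import math

supply_v = 230.0   # RMS mains voltage (assumed)
freq_hz = 50.0     # mains frequency (assumed)
lamp_v = 100.0     # approximate RMS voltage across the running arc (assumed)
target_i = 0.4     # desired RMS operating current in amperes (assumed)

# Treating the (resistive) lamp voltage and the (inductive) ballast
# voltage as roughly in quadrature, the ballast must drop
# sqrt(V_supply^2 - V_lamp^2):
ballast_v = math.sqrt(supply_v**2 - lamp_v**2)
reactance_ohm = ballast_v / target_i
inductance_h = reactance_ohm / (2 * math.pi * freq_hz)
print(round(inductance_h, 2))  # ≈ 1.65 H
```

Without this series reactance the lamp's low running resistance would draw a destructive current, which is why discharge lamps cannot be connected directly to the mains.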
Lamp life expectancy
Life expectancy is defined as the number of hours of operation for a lamp until 50% of them fail. This means that it is possible for some lamps to fail after a short amount of time and for some to last significantly longer than the rated lamp life. This is an average (median) life expectancy. Production tolerances as low as 1% can create a variance of 25% in lamp life. For LEDs, lamp life is when 50% of lamps have lumen output drop to 70% or less.
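The median definition can be illustrated with a toy sample (the failure times below are invented for the example):

```python
import statistics

# Invented failure times, in hours, for a batch of seven lamps.
failure_hours = [620, 810, 930, 1000, 1050, 1190, 1400]

# Rated life is the median time to failure: half the lamps fail sooner.
rated_life = statistics.median(failure_hours)
print(rated_life)  # 1000
```

Here the "1000-hour" rating is perfectly consistent with one lamp failing at 620 hours and another lasting 1400, which is the scatter the paragraph above describes.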
Lamps are also sensitive to switching cycles. The rapid heating of a lamp filament or electrodes when a lamp is turned on is the most stressful event on the lamp. Most test cycles have the lamps on for 3 hours and then off for 20 minutes. (Some standard had to be used, since it is unknown how consumers will use the lamp.) This switching cycle repeats until the lamps fail, and the data is recorded. If the on-time is reduced to 1 hour, lamp life is usually shortened because the lamp is switched on more often. Rooms with frequent switching (bathrooms, bedrooms, etc.) can expect much shorter lamp life than what is printed on the box.
Public lighting
The total amount of artificial light (especially from street lights) is sufficient for cities to be easily visible at night from the air, and from space. This light is the source of the light pollution that burdens astronomers and others.


The Impact of Reproductive Technology
'Infertility is estimated to affect more than 80 million people worldwide, and while developments in reproductive technologies have evolved rapidly, so have the ethical, social and political controversies which surround nearly all aspects of their use' (Vayena et al, 1997).


People have accepted the practice of various forms of fertility treatment for thousands of years. Despite this, controversy surrounds new reproductive technologies because they challenge the traditional understanding of the relationship between sex and procreation. Consequently, they also have the potential to challenge the structure of lineage and kinship networks.


This report will investigate the reported and perceived social implications of some commonly used reproductive technologies, including contraception, in-vitro fertilisation, gamete intra-fallopian transfer, intra-cytoplasmic sperm injection, pre-implantation genetic diagnosis, gamete donation and abortion.


Equality of Access
Reproductive technologies have had a significant impact on the lives of many infertile and sub-fertile couples around the world. However, due to the high financial costs of these procedures, access to these technologies is largely limited to Western society, particularly middle to high income earners. Consequently, developing countries, which have the highest rates of infertility, have limited access to these technologies.


The use of these technologies is surrounded by controversy over the social implications involved. In the case of developing countries, some fear that allowing access would lead to increased population growth in already overpopulated environments. Potential consequences would include further inequality in resource access, increased risk of the spread of disease, and escalating financial costs.


However, this ignites further controversy, as denying access to these services is considered to violate a basic human right established in the UN Declaration of Human Rights, Article 16.1, which states that 'men and women of full age, without any limitation due to race, nationality or religion, have the right to marry and to found a family' (Vayena et al, 1997).


In-Vitro Fertilisation
In-Vitro Fertilisation (IVF) is an assisted reproductive technology that has been used since the 1950s in animal breeding, and successfully produced its first human child in 1978 with the birth of Louise Brown.


The technique requires ovarian hyperstimulation in order to extract a number of developed ova from the ovaries. These are then fertilised external to the body, and the resulting embryo is replaced in the uterus several days later for implantation.


IVF is considered to have a notable impact on society, mainly due to its risks and perceived social ills. The risks of IVF have been well documented, and include multiple pregnancy, ectopic pregnancy, and ovarian hyperstimulation syndrome (OHSS).


The major outcome of IVF is that it has provided a means for many infertile couples/individuals to have children. However in doing so, there are concerns regarding the fertilisation of oocytes outside of the body. Not only is this viewed as unnatural, but it also requires extensive laboratory work in order to retrieve, fertilize and replace the resulting embryo.


Additionally, as with many assisted reproductive procedures, success entails an increased risk of multiple pregnancy, which carries considerably increased health risks for the mother and fetuses. This is because more than one embryo is often transferred into the uterus to increase the chance of implantation. The procedure also increases the risk of ectopic pregnancy, miscarriage, premature birth and other complications. Therefore, it has the potential to lead to significant emotional and financial costs for the family and wider society. 'It has been reported that, on average, hospital charges for a twin delivery were four times higher than for a singleton, whereas charges for a triplet delivery were eleven times higher. Additionally, there are long term costs associated with complications, including mental retardation, cerebral palsy, chronic problems with lung development and learning disabilities, which increase in frequency with pre-maturity.' (Kaz et al 2002)


Another controversial issue is associated with age. There is debate over what age is too old for a person to undergo IVF in order to have a child, with reports of women utilizing its services after the onset of menopause. This raises concern for the mother's health in surviving the pregnancy, as well as her ability to survive long enough to raise the child.


Intra-cytoplasmic Sperm Injection
ICSI was introduced in 1992 and is considered to overcome the obstacles that IVF cannot. It allows clinically infertile men to have children without the use of a donor.


The process involves removal of tissue from the testes, on which a biopsy is carried out and sperm are removed. The fertilisation and implantation process occurs as for IVF; however, it involves the risk of possible developmental problems in the offspring, ectopic pregnancy, and OHSS.


The major concern with the use of ICSI to treat male-factor infertility is the belief that these infertile men will pass on their infertility to their offspring (particularly males), perpetuating a cycle of ART dependency in order to reproduce. There is also the belief that if a person cannot naturally reproduce, then they are not meant to. However, one must consider whether there actually is a gene for infertility and, if so, how likely such a gene is to be passed on through the use of ICSI.


Gamete Intrafallopian Transfer
GIFT is similar to IVF in that the woman's ovaries are stimulated to produce multiple oocytes at one time, which are then collected. Spermatozoa are also collected from the male partner or donor. The difference, however, lies in the process of fertilization. GIFT involves transferring the collected gametes into the woman's fallopian tubes, allowing fertilization to occur as it would 'naturally'. Consequently, GIFT is the only form of ART that is supported by the Catholic Church, provided the spermatozoa are collected during intercourse.


The probability of a successful pregnancy using this method is no better than with conventional IVF, and it is not suitable for many causes of infertility, including blocked fallopian tubes, pelvic adhesions or severe forms of male infertility. Nevertheless, it has a profound impact for infertile couples who want children but are unwilling to defy their beliefs. This is particularly the case for members of the Catholic faith, which was estimated to have a total worldwide population of 1.06 billion in 2001.


Although this procedure is reported to have only a 20% success rate, and is consequently responsible for substantial disappointment for many couples, its positive outcomes are argued to outweigh this issue. According to Wikipedia (2005), the inability to conceive often bears a stigma in many cultures around the world. Additionally, the anxiety and disappointment of having this knowledge often leads to marital discord. Therefore, this technology is believed to provide these couples with hope, which is argued to improve marital stability, resulting in a number of favourable social implications.


However, there is argument that this technology will provide an incentive for couples in Western societies to delay the age of first conception, which is already an observed trend. This has the potential to slow population growth, and possibly hinder the population's progress and productivity in the future. However, this procedure becomes significantly less effective with increasing age of the female. Provided this information becomes public knowledge, it is unlikely to cause a significant effect on wider society.


This procedure also involves an increased risk of multiple pregnancy and complications, including ectopic pregnancy, miscarriage and premature birth. As mentioned previously, this can entail significant individual and social implications.


Donors
The donation of gametes and embryos to infertile couples has proved to have significant success rates in obtaining a successful pregnancy. However, there are a number of concerns associated with its use.


By tradition, parents create children. However, this technology has challenged this belief by redefining the concept: 'it is children who create parents' (Edwards et al 1993). As a result, the use of donors challenges many social concepts associated with kinship and lineage.


Some religions, such as the Catholic faith, consider the donation of gametes to constitute the interference of a third party in the ‘holiness’ of marriage. Therefore, ‘a couple confronted with the possibility of a sperm or oocyte donation must overcome a symbolic barrier of adultery.’ (Englert et al 2004) Additionally, this removes a partner’s biological interest in the child, and has created instances of custody debate between the genetic parent and birth parent.


This has ignited debate over the issue of anonymity. Some believe it is up to the parents who raise the child to decide whether or not to disclose the information. However, according to Englert et al (2004), ‘non-anonymity is gaining grounds, mainly because (true or not), in a society that gives more and more space to genetics, it is believed that knowing your genetic origin is an important part of knowing who you are, and that knowing the identity of her or his donor is part of your wellbeing.’


Additionally, according to Edwards et al (1993), the use of this technology instigates the controversial issue of virgin births, where women who do not want to have sexual relations, can have the option of having children. This opens the ethical debate over same sex parenting.


Couples generally prefer the donor to be a relation. This reflects the importance societies have placed on genetic heritability. However, there are differing opinions about what is acceptable for each sex's gametes. Some consider oocyte donation more acceptable, as it is considered an asexual process, therefore avoiding the perception of adultery. In addition, 'one study found that 86% of the women and 66% of their partners in recipient couples favored using a sister for oocyte donation, but 9% of the women and 14% of the men expressed the same preference using a brother for sperm donation.' (Englert et al 2004) Edwards et al (1993) found similar results and attributed them to sexual competition: donation was considered to create a closer bond between sisters, but conflict between brothers.


Another issue is associated with selling gametes as commodities. Oocytes are reported to command the highest prices, largely because donation requires more effort from the female donor and incurs some potential health risks. However, this has placed a premium on women who are in good health and who appear to be a 'good investment'. As a result, it has increased the risk that donors will hide possible health problems, with potentially detrimental health effects for the couple and the resulting child.


There is also concern about consanguinity between offspring of recipients from the same donor. According to Borrero, C (2003) ‘this is a problem in small communities in which a very limited supply of donors is available. It has been suggested that in a population of 800 000, limiting a single donor to no more than 25 pregnancies would avoid inadvertent consanguineous conception.’
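A back-of-envelope sketch of the reasoning behind such donor limits: capping offspring per donor caps the number of unknowing half-sibling pairs relative to all possible couples in the population. The model below is deliberately crude (random pairing within a single closed generation), so the number is illustrative, not the published derivation.

```python
from math import comb

population = 800_000        # figure quoted in the source
offspring_per_donor = 25    # suggested cap per donor

# Pairs of half-siblings created by one donor at the cap:
half_sib_pairs = comb(offspring_per_donor, 2)   # 300
# All possible pairs in the population:
all_pairs = comb(population, 2)

risk_per_donor = half_sib_pairs / all_pairs
print(f"{risk_per_donor:.2e}")  # 9.38e-10
```

Even under these rough assumptions the chance that a randomly formed couple shares a given donor is vanishingly small, which is the intuition behind the 25-pregnancy guideline.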


Pre-Implantation Genetic Diagnosis
Pre-Implantation Genetic Diagnosis (PGD) “provides the opportunity for couples at risk of having a child with a serious genetic condition, to start a pregnancy with the knowledge their embryos will not be affected with the indicated disease” (Cram and de Kretser 2002, pg. 194).


While this is the major focus of PGD, fears are held that it will be used to make 'designer babies' who adhere to certain requirements desired by the parents (e.g. IQ, hair and eye colour, athletic ability). Currently, this is not possible, but debate over the societal impact of such a prospect has been overwhelming.


However, PGD does have the ability to determine the sex of the embryo, well before it develops into a fetus and sex testing can be carried out via ultrasound. This leaves open the way for people to choose the sex of their child and dispose of embryos that are not of the desired sex. Allowing couples to determine the make-up of their family through PGD and IVF is currently prevented in Australia through legislation, but sex selection on the basis of a sex-linked chromosomal disorder has been allowed.


Biased sex selection could have considerable implications for society by altering population demographics and sex ratios. There could also be consequences for the family unit, as the technology is not 100% accurate. Parents holding high expectations of having a child of a certain sex would be emotionally affected by having a child of the other sex, not to mention the parental investment issues the child would subsequently face.


Contraception
Contraception has allowed people to have control over their own fertility. People are therefore able to attempt to avoid pregnancy at times when they do not plan to have children, or to plan and choose the number of children they wish to have (IPPF, p17). There are many different techniques encompassed by the term contraception.


Natural family planning techniques are methods of contraception which the Catholic Church strongly promotes; they do not require synthetic measures but rather rely on periods of abstinence (IPPF, p148).


One such technique is the basal body temperature method, in which females record their temperature immediately after waking each morning. Throughout the early phase of the cycle, just following menstruation, the temperature will be low. Ovulation is indicated by a rise of 0.2-0.4 °C in temperature; the female then abstains from intercourse until there have been 3 consecutive days of high temperature (IPPF, p149).
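The temperature rule above can be sketched as a small detection function. The threshold, readings and baseline below are illustrative assumptions; this is a sketch of the stated rule, not clinical software.

```python
def three_high_days(temps_c, baseline_c, rise=0.2, days_high=3):
    """Return True once `days_high` consecutive readings are at least
    `rise` degrees above the early-cycle baseline."""
    run = 0
    for t in temps_c:
        run = run + 1 if t >= baseline_c + rise else 0
        if run >= days_high:
            return True
    return False

# Three consecutive elevated readings after the post-ovulation rise:
readings = [36.4, 36.5, 36.4, 36.8, 36.8, 36.9]
print(three_high_days(readings, baseline_c=36.4))  # True
```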


Another technique practiced is the cervical mucus method, which involves monitoring the vaginal and cervical mucus. At ovulation, when oestrogen levels are raised, the mucus is thin, clear and slippery; women must abstain from intercourse until the mucus returns to a thick, sticky and opaque consistency (IPPF, p151).


Family planning methods have allowed women who are not prohibited by culture to use barrier or oral contraceptives to control their fertility and plan their families. Natural methods require dedication to be effective, as they involve long periods without intercourse. These methods have impacted society by decreasing the average size of families.


Barrier methods of contraception such as condoms are a common form of contraception. They are widely available at low cost throughout the world, which has led to their wide use amongst males and females. When used correctly, latex rubber condoms are effective at preventing both pregnancy and sexually transmitted infections. Condom use has an effectiveness rate of around 95%, with pregnancies per 100 women varying between 2 and 15 (Everitt & Johnson, p256).


Diaphragms, cervical caps and spermicides are other barrier methods that prevent sperm from entering the female reproductive tract during intercourse (Everitt & Johnson, p258).


The development of the female contraceptive pill has allowed the suppression of ovulation through combined oestrogen-and-progesterone or progesterone-only doses. The pill is highly effective when taken correctly and economical, at a cost of around $5 a month (Everitt & Johnson, p259). Pill use is associated with an effectiveness rate of around 98%, with pregnancies per 100 women varying between 1 and 3 (Everitt & Johnson, p256). There is also a combination of three pills that can be taken up to 72 hours after unprotected intercourse, which prevents fertilisation as a result of their high levels of oestrogen and progesterone.
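A quick back-of-envelope calculation shows how the quoted failure figures compound over several years of use. This is an illustrative sketch only: the function name is ours, and treating "pregnancies per 100 women" as an independent per-year probability (the usual Pearl-index convention) is an assumption, since the text does not state the time period.

```python
def cumulative_failure(per_100_woman_years, years):
    """Probability of at least one pregnancy over `years` of use,
    assuming an independent annual failure rate."""
    p_year = per_100_woman_years / 100.0
    return 1 - (1 - p_year) ** years

# Condoms, high end of the quoted range (15 per 100), over 5 years:
print(round(cumulative_failure(15, 5), 3))  # → 0.556
# Combined pill, high end of the quoted range (3 per 100), over 5 years:
print(round(cumulative_failure(3, 5), 3))   # → 0.141
```

Even small annual failure rates accumulate noticeably over years of typical use, which is why the gap between "perfect use" and "typical use" figures matters.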


Another contraceptive available to women is the intrauterine device. Made of copper, it is inserted into the uterus, where it produces a uterine environment that impedes sperm transport and prevents fertilization (Everitt & Johnson, p263). This form of contraception is considered to be as effective as the combined oral pill (Everitt & Johnson, p263).


Another modern contraceptive measure is the Implanon implant, which works to prevent pregnancy for a period of three years. It is a small plastic rod implanted under the skin of the upper arm, from which a low dose of progesterone is slowly released into the bloodstream. When inserted by a doctor, Implanon is highly effective, preventing pregnancy in over 99% of cases (FPWA, 2005).


Abortion
Abortion can be a legitimate choice for couples or women faced with a pregnancy that could have abnormal outcomes or could harm the mother. We view it as a reproductive technology and a method of fertility control that all communities use. The procedure usually occurs during the first trimester of pregnancy and involves dilating the cervix with metal sounds, then removing the conceptus by scraping with a curette or by vacuum aspiration. However, the procedure does entail a risk of future infertility as a result of infections that may arise (Everitt & Johnson, p264).


The use of this procedure is highly controversial and often sparks emotionally charged debate. The debate centres on human rights, with one side insisting on the mother’s right to choose and the other arguing for the child’s right to survive.


In China, women and their families are pressured by population-control officials, who have been criticized for having no regard for human rights. Unmarried couples in China are effectively not permitted to have children.
Nine studies were carried out in seven urban areas and two rural areas to gather information about sexual activity and contraceptive use in these populations. They found an unmet need for temporary methods of contraception in the urban areas of China (Garner et al 2004). Unmarried women had typically been sexually active, and up to one-third in some areas had had a previous pregnancy. A striking majority of those women who had become pregnant had an induced abortion: induced abortions occurred in 86% to 96% of such women across the regions (Garner et al 2004).
Abortion clinics in Beijing, Changsha, and Dalian were surveyed from January to September 2002 using self-administered questionnaires to determine the rates of repeat abortion and contraceptive use among unmarried young women seeking abortion in China (Cheng et al 2004). Over this time, 4,547 unmarried women came to the clinics seeking an abortion. Of these women, 33% reported having had one previous induced abortion, and of those who had had more than one abortion, only one-third used contraception at their first sexual intercourse following the procedure. Of the 446 women who did use contraception, 41.3% used the withdrawal or rhythm methods. Condom use was reported by 65% of the sample, although only 9.6% used condoms correctly and consistently. Of the pregnancies, 47.7% were the result of not using contraception and the remaining 52.3% were related to contraceptive failure (Cheng et al 2004). Similar studies have found that failure of contraceptive methods and unprotected intercourse contribute greatly to the high incidence of abortions (Xiao and Zhao 1997).


Sex ratios in China
Current practice of family planning in China is based on the country's population policy and strategy (Xiao and Zhao 1997). Historically there has been an active preference for sons in China, and a sex-ratio inequality has resulted. In the Yunnan Province, abortion patterns and reported sex ratios at birth in a random sample of 1,336 women aged 15 to 64 were analysed over the 20-year period from 1980 to 2000 in relation to parity and the sex of previous children (Johansson et al 2004). There was a male bias in the abortion pattern during the 1980s, and by the end of the 1990s most pregnancies of women who already had two children were being terminated. Over this time the sex ratio at birth increased from 107 males per 100 females in 1984–1987 to 110 males per 100 females across 1988–2000 (Johansson et al 2004). Many women’s reproductive choices were influenced by son preference operating within the particular family planning policies in place, despite assumptions that discrimination against girls would decline as economic development progressed and rates of female education increased.


According to China’s official news agency, 119 boys are born for every 100 girls in China. Elsewhere in the world the ratio also favours boys, but more moderately, at 103 to 107 boys for every 100 girls (McElroy 2004). This pronounced sex bias leads to an excess of males and a deficit of available female partners.


Social engineering has resulted in the continuation of induced abortions as a result of the one-child policy imposed in the 1980s to control population growth. When people can choose the sex of their one and only child, they often prefer males for various economic and social reasons. The latest Chinese census shows that the rural provinces of Hainan and Guangdong have sex ratios at birth of 135.6 and 130.3 boys per 100 girls respectively (McElroy 2004). Every time an abortion is performed to avoid a girl in favour of a boy, the ratio becomes more biased towards males. This is a serious consequence of this use of abortion.
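The arithmetic behind "an excess of males" is straightforward. The sketch below is illustrative only: the function name and the round cohort size are our assumptions, while the ratios are those quoted above (McElroy 2004).

```python
def excess_males(boys_per_100_girls, cohort_size):
    """'Extra' boys in a birth cohort relative to a balanced 100:100 ratio."""
    frac_male = boys_per_100_girls / (boys_per_100_girls + 100)
    boys = cohort_size * frac_male
    girls = cohort_size - boys
    return round(boys - girls)

print(excess_males(119, 1_000_000))    # national figure → 86758 extra boys
print(excess_males(135.6, 1_000_000))  # Hainan figure → 151104 extra boys
```

Per million births, the national ratio alone implies tens of thousands of men with no corresponding female partner in their cohort, which is the mechanism behind the "deficit of available female partners" described above.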


Conclusion
Serour, G (1996) stated:


Though reproductive choice is basically a personal decision, it is not totally so. This is because reproduction is a process which involves not only the person who makes the choice, but it also involves the other partner, the family, society and the world at large. It is therefore not surprising that reproductive choice is affected by the diverse contexts, sexual morals, cultures and religions, as well as the official stance of different societies.


When considering these issues it is important to remember the reality of the abilities and limitations of these technologies. Although there have been sizable developments in the field, those who successfully utilize these services currently represent a minority of the population.


There is general agreement, however, that there will be considerable future development in this discipline, encompassing both foreseen and unforeseen implications for society. Nonetheless, the impact and extent of these implications remain under deliberation.
Immortality and the Science of Transhumanism
While many transhumanist theorists and advocates[who?] seek to apply reason, science and technology to reducing poverty, disease, disability and malnutrition around the globe,[32] transhumanism is distinctive in its particular focus on applying technologies to the improvement of human bodies at the individual level. While many people[who?] believe that all transhumanists strive for immortality, this is not necessarily true. Hank Pellissier, managing director of the Institute for Ethics and Emerging Technologies (2011-2012), surveyed transhumanists and found that, of the 818 respondents, 23.8% did not want immortality.[63] Some of the reasons given were boredom, Earth’s overpopulation and the desire "to go to an afterlife". Several controversial new religious movements from the late 20th century, such as Raëlism, have explicitly embraced transhumanist goals of transforming the human condition by applying technology to the alteration of the mind and body.[72] However, most thinkers associated with the transhumanist movement focus on the practical goals of using technology to help achieve longer and healthier lives, while speculating that future understanding of neurotheology and the application of neurotechnology will enable humans to gain greater control of altered states of consciousness, commonly interpreted as spiritual experiences, and thus achieve more profound self-knowledge.[73] Transhumanist Buddhists have sought to explore areas of agreement between various types of Buddhism and Buddhist-derived meditation and mind-expanding "neurotechnologies".[74] "Cyborg Buddhists" have been criticised[75] for appropriating mindfulness as a tool for transcending humanness. While some transhumanists[who?] take an abstract and theoretical approach to the perceived benefits of emerging technologies, others have offered specific proposals for modifications to the human body, including heritable ones.
Transhumanists are often concerned with methods of enhancing the human nervous system. Though some[who?] propose modification of the peripheral nervous system, the brain is considered the common denominator of personhood and is thus a primary focus of transhumanist ambitions.[87]
As proponents of self-improvement and body modification, including gender transitioning, transhumanists tend to use existing technologies and techniques that supposedly improve cognitive and physical performance, while engaging in routines and lifestyles designed to improve health and longevity.[88] Depending on their age, some[who?] transhumanists express concern that they will not live to reap the benefits of future technologies. However, many have a great interest in life extension strategies and in funding research in cryonics in order to make the latter a viable option of last resort, rather than remaining an unproven method.[89] Regional and global transhumanist networks and communities with a range of objectives exist to provide support and forums for discussion and collaborative projects.
Transhumanists believe that "we are morally obligated to help the human race transcend its biological limits".[130] Indeed, they go so far as to label those who oppose them "bioluddites".[130] Though the gamut of transhumanist opinion ranges from those who believe we will eventually be cyborgs to those who simply want their brains frozen in the hope of being resuscitated in the future, all have considered the question of human identity and whether it will be compromised. While the concept of doing away with negative emotions is appealing in theory, there are possible negative implications. For example, Fukuyama points out that without the emotion of aggression, "we wouldn’t be able to defend ourselves".[130] Such changes would affect not only our humanity but also our interactions with others.
The Lazarus Effect: Pushing the Boundaries of Resuscitation
The Lazarus Effect (1983) is the third science fiction novel set in the Destination: Void universe by the American author Frank Herbert and poet Bill Ransom. It takes place some time after the events in The Jesus Incident.
Plot summary
The Lazarus Effect continues the story of the planet Pandora that began in The Jesus Incident. The sentient kelp is almost extinct, Ship is gone, there is no more dry land, the majority of humanity is heavily mutated from the genetic experiments performed by Jesus Lewis, and a power-hungry madman is attempting to control the planet. But the kelp is returning, and this time Avata does not remain passive while people refuse to Worship.
Major themes
The book deals with concepts such as artificial intelligence, worship and the inherent problems of totalitarianism (a political system in which the state holds total authority over society and seeks to control all aspects of public and private life). It also addresses the issues of clones (genetically identical copies of an organism), genetic engineering (the direct manipulation of an organism's genome using biotechnology) and racism.
Cloning
First moves
Hans Spemann, a German embryologist, was awarded the Nobel Prize in Physiology or Medicine in 1935 for his discovery of the effect now known as embryonic induction, exercised by various parts of the embryo, which directs the development of groups of cells into particular tissues and organs. In 1928, he and his student Hilde Mangold were the first to perform somatic-cell nuclear transfer using amphibian embryos – one of the first moves towards cloning.[16]
Methods
Reproductive cloning generally uses somatic cell nuclear transfer (SCNT) to create animals that are genetically identical. This process entails the transfer of a nucleus from a donor adult (somatic) cell to an egg from which the nucleus has been removed, or to a cell from a blastocyst from which the nucleus has been removed.[17] If the egg begins to divide normally, it is transferred into the uterus of a surrogate mother. Such clones are not strictly identical, since the somatic cells may contain mutations in their nuclear DNA. Additionally, the mitochondria in the cytoplasm also contain DNA, and during SCNT this mitochondrial DNA comes wholly from the cytoplasmic donor's egg; thus the mitochondrial genome is not the same as that of the nucleus donor cell. This may have important implications for cross-species nuclear transfer, in which nuclear-mitochondrial incompatibilities may lead to death.
Artificial embryo splitting, or embryo twinning, a technique that creates monozygotic twins from a single embryo, is not considered in the same fashion as other methods of cloning. In this procedure, a donor embryo is split into two distinct embryos that can then be transferred via embryo transfer. It is optimally performed at the 6- to 8-cell stage, where it can be used as an extension of IVF to increase the number of available embryos.[18] If both embryos are successful, the procedure gives rise to monozygotic (identical) twins.
Dolly the sheep
Main article: Dolly the sheep
Dolly clone
Dolly, a Finn-Dorset ewe, was the first mammal to have been successfully cloned from an adult cell. Dolly was formed by taking a cell from the udder of her biological mother, who was six years old when the cells were taken.[19] Dolly's embryo was created by inserting the cell into a sheep ovum; it took 434 attempts before an embryo was successful.[20] The embryo was then placed inside a female sheep that went through a normal pregnancy.[21] Dolly was cloned at the Roslin Institute in Scotland and lived there from her birth in 1996 until her death in 2003, when she was six. She was born on 5 July 1996 but not announced to the world until 22 February 1997.[20] Her stuffed remains were placed at Edinburgh's Royal Museum, part of the National Museums of Scotland.[22]
Dolly was publicly significant because the effort showed that genetic material from a specific adult cell, programmed to express only a distinct subset of its genes, can be reprogrammed to grow an entirely new organism. Before this demonstration, it had been shown by John Gurdon that nuclei from differentiated cells could give rise to an entire organism after transplantation into an enucleated egg.[23]However, this concept was not yet demonstrated in a mammalian system.
The first mammalian cloning (resulting in Dolly the sheep) produced 29 embryos from 277 fused eggs; these yielded three lambs at birth, only one of which lived. In a bovine experiment involving seventy cloned calves, a third died young. For horses, Prometea took 814 attempts. Notably, although the first clones were frogs, no adult cloned frog has yet been produced from a somatic adult nucleus donor cell.
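The attempt figures above translate into very low per-attempt success rates. A simple illustrative tabulation (the labels are ours; the numbers are as quoted in the text):

```python
figures = {
    "Dolly: embryos per fused egg":    (29, 277),
    "Dolly: surviving lamb per egg":   (1, 277),
    "Prometea: live foal per attempt": (1, 814),
}

for label, (successes, total) in figures.items():
    print(f"{label}: {successes}/{total} = {100 * successes / total:.1f}%")
```

Roughly 10% of fused eggs became embryos, and well under 1% of attempts produced a surviving animal, which is why early SCNT was regarded as extraordinarily inefficient.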
There were early claims that Dolly the Sheep had pathologies resembling accelerated aging. Scientists speculated that Dolly's death in 2003 was related to the shortening of telomeres, DNA-protein complexes that protect the end of linear chromosomes. However, other researchers, including Ian Wilmut who led the team that successfully cloned Dolly, argue that Dolly's early death due to respiratory infection was unrelated to deficiencies with the cloning process. This idea that the nuclei have not irreversibly aged was shown in 2013 to be true for mice.[24]
Dolly was named after performer Dolly Parton because the cells cloned to make her were from a mammary gland cell, and Parton is known for her ample cleavage.


Denying the Limits of Society
The Early Legacy of Printing

The Invention of the Printing Press

Most of us tend to take printed materials for granted, but imagine life today if the printing press had never been invented. We would not have books, magazines or newspapers. Posters, flyers, pamphlets and mailers would not exist. The printing press allows us to share large amounts of information quickly and in huge numbers. In fact, it is so important that it has come to be known as one of the most important inventions of our time. It drastically changed the way society evolved. In this article, we will explore how the printing press came about, as well as how it affected culture.
Life before the printing press
Before the printing press was invented, any writings and drawings had to be completed painstakingly by hand. It wasn’t just anyone who was allowed to do this. Such work was usually reserved for scribes who lived and worked in monasteries. The monasteries had a special room called a “scriptorium.” There, the scribe would work in silence, first measuring and outlining the page layouts and then carefully copying the text from another book. Later the illuminator would take over to add designs and embellishments to the pages. In the Dark Ages and Middle Ages, books were usually only owned by monasteries, educational institutions or extremely rich people. Most books were religious in nature. In some cases, a family might be lucky enough to own a book, in which case it would be a copy of the Bible.
Inspiration and invention of the printing press
Around the late 1430s, a German man named Johann Gutenberg was quite desperate to find a way to make money. At the time, there was a trend in attaching small mirrors to one’s hat or clothes in order to soak up healing powers when visiting holy places or icons. The mirrors themselves were not significant, but Gutenberg quietly noted how lucrative it was to create mass amounts of a cheap product. During the 1300s to 1400s, people had developed a very basic form of printing. It involved letters or images cut on blocks of wood. The block would be dipped in ink and then stamped onto paper. Gutenberg already had previous experience working at a mint, and he realized that if he could use cut blocks within a machine, he could make the printing process a lot faster. Even better, he would be able to reproduce texts in great numbers. However, instead of using wood blocks, he used metal. This was known as a “movable type machine,” since the metal block letters could be moved around to create new words and sentences. With this machine, Gutenberg made the very first printed book, which was naturally a reproduction of the Bible. Today the Gutenberg Bible is an incredibly valuable, treasured item for its historical legacy.
How the printing press works
With the original printing press, a frame is used to set groups of type blocks. Together, these blocks make words and sentences; however, they are all in reverse. The blocks are all inked and then a sheet of paper is laid on the blocks. All of this passes through a roller to ensure that the ink is transferred to the paper. Finally, when the paper is lifted, the reader can see the inked letters, which now appear normally as a result of the reversed blocks. These printing presses were operated by hand. Later, in the 19th century, other inventors created steam-powered printing presses that did not require a hand operator. In comparison, today’s printing presses are electronic and automated, and can print far faster than ever before!
Impact of the printing press
Gutenberg’s invention made a dramatic impact when it reached the public. At first, the noble classes looked down on it. To them, hand-inked books were a sign of luxury and grandeur, and the cheaper, mass-produced books were no match for them. Thus, printed materials were at first more popular with the lower classes. When word spread about the printing press, other print shops opened and soon it developed into an entirely new trade. Printed texts became a new way to spread information to vast audiences quickly and cheaply. Academics benefited from this dissemination of scholarly ideas and even politicians found that they could garner the public’s interest through printed pamphlets. An important side effect was that people could read and increase their knowledge more easily now, whereas in the past it was common for people to be quite uneducated. This increased the discussion and development of new ideas. Another significant effect was that the printing press was largely responsible for Latin’s decline as other regional languages became the norm in locally printed materials.


Household Technologies and Women's Liberation
The advent of modern appliances such as washing machines and refrigerators had a profound impact on 20th Century society, according to a new Université de Montréal study. Plug-in conveniences transformed women's lives and enabled them to enter the workforce, says Professor Emanuela Cardia, from the Department of Economics.
Within a short time-span, household technology became accessible to the majority. In the late 1910s, a refrigerator sold for $1,600, and 26 years later such appliances could be purchased for $170. Access to electric stoves, washing machines and vacuum cleaners also became widespread.
"These innovations changed the lives of women," says Professor Cardia. "Although it wasn't a revolution per se, the arrival of this technology in households had an important impact on the workforce and the economy."
Professor Cardia based her research on more than 3,000 censuses conducted between 1940 and 1950, from thousands of American households across urban and rural areas. "We calculated that women who loaded their stove with coal saved 30 minutes every day with an electric stove," says Cardia. "The result is that women flooded the workforce. In 1900, five percent of married women had jobs. In 1980, that number jumped to 51 percent."
In 1913, the vacuum cleaner became available, in 1916 it was the washing machine, in 1918 it was the refrigerator, in 1947 the freezer, and in 1973 the microwave was on the market. All of these technologies had an impact on home life, but none had a stronger impact than running water.
"We often forget that running water is a century-old innovation in North America, and it is even more recent in Europe. Of all innovations, it's the one with the most important impact," says Cardia.
In 1890, 25 percent of American households had running water and eight percent had electricity. In 1950, 83 percent had running water and 94 percent had electricity. According to Cardia, in 1900, a woman spent 58 hours per week on household chores. In 1975, it was 18 hours.
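The figures quoted from Cardia's study can be put in perspective with some simple arithmetic (illustrative only; all inputs are the numbers quoted above):

```python
hours_1900, hours_1975 = 58, 18            # weekly chore hours, 1900 vs 1975
freed_per_week = hours_1900 - hours_1975
print(freed_per_week)                      # → 40 hours freed per week
print(round(freed_per_week * 52 / 24, 1))  # → 86.7 full days per year

# The 30 minutes a day saved by an electric stove alone:
print(30 * 7 / 60)                         # → 3.5 hours per week
```

Forty hours a week is a full-time job's worth of labour, which makes the link Cardia draws between household technology and women's entry into the workforce concrete.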
While there have been several studies on the industrial revolution and different aspects of technology, says Cardia, very few investigations have focused on the household revolution. "Yet, women play a very important role in the economy whether they hold a job or work at home."
 The study is entitled, "Household Technology: Was it the Engine of Liberation?"
The Impact of Train and Automobile
Over the course of the 20th century, the car rapidly developed from an expensive toy for the rich into the de facto standard for passenger transport in most developed countries.[1][2] In developing countries, the effects of the car have lagged, but they are coming to mirror those seen in developed nations. The development of the car built upon the transport revolution started by railways, and like the railways, it introduced sweeping changes in employment patterns, social interactions, infrastructure and goods distribution.
The effects of the car on everyday life have been a subject of controversy. While the introduction of the mass-produced car represented a revolution in mobility and convenience, the modern consequences of heavy automotive use contribute to the use of non-renewable fuels, a dramatic increase in the rate of accidental death, social isolation, the disconnection of community, the rise in obesity, the generation of air and noise pollution, urban sprawl, and urban decay.
Worldwide, the car has allowed easier access to remote places. However, average journey times to regularly visited places have increased in large cities, especially in Latin America, as a result of widespread car adoption. This is due to traffic congestion and the increased distances between home and work brought about by urban sprawl.[5]
Examples of car access issues in underdeveloped countries are:
  • Paving of Mexican Federal Highway 1 through Baja California, completing the connection of Cabo San Lucas to California, and convenient access to the outside world for villagers along the route. (occurred in the 1950s)
  • In Madagascar, approximately 30 percent of the population does not have access to reliable all weather roads.[6]
  • In China, 184 towns and 54,000 villages have no motor road (or roads at all)[7]
  • CDC researchers have hypothesized that the explosive spread of HIV derived in part from the more intensive social interactions afforded by new road networks in Central Africa, which allowed more frequent travel from villages to cities, and from the higher-density development of many African cities in the period 1950 to 1980.[8]
Certain developments in retail are also partially due to car use.
External costs
According to the Handbook on estimation of external costs in the transport sector,[9] produced by Delft University and the main reference in the European Union for assessing the external costs of cars, the main external costs of driving a car are congestion and scarcity costs, accident costs, air pollution costs, noise costs, climate change costs, costs for nature and landscape, costs for soil and water pollution, and costs of energy dependency.
The use of cars for transportation creates barriers by reducing the space available for walking and cycling. This may look like a minor problem initially, but in the long run it poses a threat to children and the elderly. Transport is a major land use, leaving less of this resource for other purposes.
Cars also contribute to pollution of air and water. Though a horse produces more waste, cars are cheaper and thus far more numerous in urban areas than horses ever were. Emissions of harmful gases such as carbon monoxide, ozone, carbon dioxide, benzene and particulate matter can damage living organisms and the environment, causing disabilities, respiratory diseases and ozone depletion. Noise pollution from cars can also result in hearing disabilities, headaches, and stress for those frequently exposed to it.
The development of the car has contributed to changes in employment distribution, shopping patterns, social interactions, manufacturing priorities and city planning; increasing use of cars has reduced the roles of walking, horses and railroads.[10]
In addition to money for roadway construction, car use was also encouraged in many places through new zoning laws that required that any new business construct a certain amount of parking based on the size and type of facility. The effect was to create many free parking spaces, and business places further back from the road.
Many new shopping centers and suburbs did not install sidewalks,[11] making pedestrian access dangerous. This had the effect of encouraging people to drive, even for short trips that might have been walkable, thus increasing and solidifying American auto-dependency.[12] As a result of this change, employment opportunities for people who were not wealthy enough to own a car and for people who could not drive, due to age or physical disabilities, became severely limited.
Prior to the appearance of the automobile, horses, walking and streetcars were the major modes of transportation within cities.[10] Horses require a large amount of care, and were therefore kept in public facilities that were usually far from residences. The wealthy could afford to keep horses for private use, hence the term carriage trade referred to elite patronage.[14] Horse manure left on the streets also created a sanitation problem.[15]
The automobile made regular medium-distance travel more convenient and affordable, especially in areas without railways. Because cars did not require rest, were faster than horse-drawn conveyances, and soon had a lower total cost of ownership, more people were routinely able to travel farther than in earlier times. The construction of highways half a century later continued this revolution in mobility. Some experts suggest that many of these changes began during the earlier Golden age of the bicycle, from 1880–1915.[16]

Changes to urban society

Beginning in the 1940s, most urban environments in the United States lost their streetcars, cable cars, and other forms of light rail, to be replaced by diesel-burning motor coaches or buses. Many of these have never returned, though some urban communities eventually installed subways.
Another change brought about by the car is that modern urban pedestrians must be more alert than their ancestors. In the past, a pedestrian had to worry only about relatively slow-moving streetcars and other obstacles. With the proliferation of the car, pedestrians must anticipate automobiles traveling at high speeds, which can cause serious and even fatal injuries,[10] unlike in earlier times when traffic deaths were usually due to horses escaping control.
According to many social scientists, the loss of pedestrian-scale villages has also disconnected communities. Many people in developed countries have less contact with their neighbors and rarely walk unless they place a high value on exercise.[17]

Advent of suburban society

Improved transport accelerated the outward growth of cities and the development of suburbs beyond an earlier era's streetcar suburbs.[10] Until the advent of the car, factory workers lived either close to the factory or in high-density communities farther away, connected to the factory by streetcar or rail. The car, and the federal subsidies for roads and suburban development that supported car culture, allowed people to live in low-density residential areas even farther from the city center and from integrated city neighborhoods.[10] Industrial suburbs were few, due in part to single-use zoning; because they created few local jobs, residents commuted longer distances to work each day as the suburbs continued to expand.[3]

Cars in Popular Culture

The car had a significant effect on the culture of the middle class. As other vehicles had been, cars were incorporated into artworks including music, books and movies. Between 1905 and 1908, more than 120 songs were written in which the automobile was the subject.[10] Although authors such as Booth Tarkington decried the automobile age in books including The Magnificent Ambersons (1918), novels celebrating the political effects of motorization included Free Air (1919) by Sinclair Lewis, which followed in the tracks of earlier bicycle touring novels. Some early 20th-century experts doubted the safety and suitability of allowing female automobilists. Dorothy Levitt was among those eager to lay such concerns to rest, so much so that a century later only one country still had a women-to-drive movement. Where 19th-century mass media had made heroes of Casey Jones, Allan Pinkerton and other stalwart protectors of public transport, new road movies offered heroes who found freedom and equality, rather than duty and hierarchy, on the open road.
George Monbiot writes that widespread car culture has shifted voters' preferences to the right of the political spectrum.[18] He argues that car culture has contributed to an increase in individualism and fewer social interactions between members of different socioeconomic classes.
As tourism became motorized, individuals, families and small groups were able to vacation in distant locations such as national parks. Roads including the Blue Ridge Parkway were built specifically to help the urban masses experience natural scenery previously seen only by a few. Cheap restaurants and motels appeared on favorite routes and provided wages for locals who were reluctant to join the trend toward rural depopulation.
Since the early days of the car, the American Motor League promoted the making of more and better cars, and the American Automobile Association joined the good roads movement begun during the earlier bicycle craze. When manufacturers and petroleum fuel suppliers were well established, they also joined construction contractors in lobbying governments to build public roads.[3]
Road building was sometimes also influenced by Keynesian-style political ideologies. In Europe, massive freeway building programs were initiated by a number of social democratic governments after World War II, in an attempt to create jobs and make the car available to the working classes. From the 1970s, promotion of the automobile increasingly became a trait of some conservatives.[citation needed] Margaret Thatcher mentioned a "great car economy" in the white paper Roads for Prosperity.

Cars as a hobby

Over time, the car has evolved beyond a means of transportation or a status symbol into a subject of interest and a cherished hobby for many people around the world, who appreciate cars for their craftsmanship and performance, as well as the vast array of activities one can take part in with a car.[19] People with a keen interest in cars, or who participate in the car hobby, are known as "car enthusiasts". Those who build their own custom vehicles, primarily appearance-based on original examples or reproductions of pre-1948 US market designs, and on similar designs from the World War II era and earlier from elsewhere in the world, are often known as hot rodders.
One major aspect of the hobby is collecting; cars, especially classic vehicles, are appreciated by their owners for their aesthetic, recreational and historic value.[20] Such demand generates investment potential and allows some cars to command extraordinarily high prices, becoming financial instruments in their own right.
Another major aspect of the hobby is driving events, where enthusiasts from around the world gather to drive and race their cars. Notable examples of such events are the annual Mille Miglia classic car rally and the Gumball 3000 supercar race.
Many car clubs have been set up to facilitate social interaction and companionship among those who take pride in owning, maintaining, driving and showing their cars. Many prestigious social events around the world today are centered on the hobby; a notable example is the Pebble Beach Concours d'Elegance classic car show.

Safety

Motor vehicle accidents account for 37.5% of accidental deaths in the United States, making them the country's leading cause of accidental death.[21] Travelers in cars suffer fewer deaths per journey, per unit time, or per unit distance than most other users of private transport, such as cyclists or pedestrians[citation needed], but cars are used far more, making automobile safety an important topic of study. For Americans aged 5–34, motor vehicle crashes are the leading cause of death, claiming 18,266 lives each year.[22]

Costs

In countries such as the United States, the infrastructure that makes car use possible, such as highways, roads and parking lots, is funded by the government and supported through zoning and construction requirements.[23] Fuel taxes in the United States cover about 60% of highway construction and repair costs, but little of the cost to construct or repair local roads.[24][25] Payments by motor-vehicle users fall short of government expenditures tied to motor-vehicle use by 20–70 cents per gallon of gas.[26] Zoning laws in many areas require that large, free parking lots accompany any new buildings. Municipal parking lots are often free or do not charge a market rate. Hence, the cost of driving a car in the US is subsidized, supported by businesses and the government, which cover the cost of roads and parking.[23]
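A back-of-the-envelope sketch can show the scale of that per-gallon shortfall. In the sketch below, the 20–70 cent range comes from the figures above, while the annual fuel consumption per driver is a hypothetical assumption chosen only for illustration:

```python
# Rough illustration of the per-gallon funding shortfall cited above.
# The 20-70 cents/gallon range comes from the text; the annual fuel use
# per driver is an assumption for illustration, not a source figure.
LOW_SHORTFALL_CENTS = 20     # cents per gallon (from the text)
HIGH_SHORTFALL_CENTS = 70    # cents per gallon (from the text)
ANNUAL_GALLONS = 500         # assumed gallons purchased per driver per year

low_dollars = LOW_SHORTFALL_CENTS * ANNUAL_GALLONS / 100
high_dollars = HIGH_SHORTFALL_CENTS * ANNUAL_GALLONS / 100
print(f"Implied annual shortfall per driver: ${low_dollars:.0f} to ${high_dollars:.0f}")
```

Under that assumed fuel use, the uncovered cost works out to roughly $100 to $350 per driver per year; the point is only that a small per-gallon gap compounds into a substantial annual subsidy.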
This government support of the automobile through subsidies for infrastructure, the cost of highway patrol enforcement, recovering stolen cars, and many other factors makes public transport a less economically competitive choice for commuters when considering out-of-pocket expenses. Consumers often make choices based on those costs and underestimate the indirect costs of car ownership, insurance and maintenance.[24] However, globally and in some US cities, tolls and parking fees partially offset these heavy subsidies for driving. Transportation planning policy advocates often support tolls, increased fuel taxes, congestion pricing and market-rate pricing for municipal parking as means of balancing car use in urban centers with more efficient modes such as buses and trains.
When cities charge market rates for parking, and when bridges and tunnels are tolled, driving becomes less competitive in terms of out-of-pocket costs. When municipal parking is underpriced and roads are not tolled, most of the cost of vehicle usage is paid for by general government revenue, a subsidy for motor vehicle use. The size of this subsidy dwarfs the federal, state, and local subsidies for the maintenance of infrastructure and discounted fares for public transportation.[24]
By contrast, although rail also carries environmental and social costs, its overall impact is very small.[24]
In the United States, out-of-pocket expenses for car ownership vary considerably by state. In 2013, annual car ownership costs, including repair, insurance, gas and taxes, were highest in Georgia ($4,233) and lowest in Oregon ($2,024), with a national average of $3,201.[27]
Data provided by the AAA indicates that the cost of car ownership in the United States is rising by about 2% per year.[28]
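Combined with the 2013 national average above, that growth rate implies a simple compound projection. The sketch below assumes the ~2% rate holds steady, which is an illustrative simplification rather than a claim from the source:

```python
# Sketch: compounding the 2013 national-average ownership cost at the ~2%
# annual growth rate AAA reports. Assumes a constant rate, which is an
# illustrative simplification, not a source claim.
base_cost = 3201.0   # 2013 US national average in dollars (from the text)
growth_rate = 0.02   # ~2% per year (from the text)

for years in (1, 5, 10):
    projected = base_cost * (1 + growth_rate) ** years
    print(f"Projected average cost after {years} year(s): ${projected:,.0f}")
```

Run as written, this yields roughly $3,265, $3,534, and $3,902 for 1, 5, and 10 years respectively, illustrating how a modest annual increase accumulates.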


Social Media and Social Upheaval
Social movements (or social upheavals) are a type of group action. They are large, sometimes informal, groupings of individuals or organizations which focus on specific political or social issues. In other words, they carry out, resist or undo a social change.
Modern Western social movements became possible through education (the wider dissemination of literature) and the increased mobility of labor resulting from the industrialization and urbanization of 19th-century societies.[1] It is sometimes argued that the freedom of expression, education and relative economic independence prevalent in modern Western culture are responsible for the unprecedented number and scope of contemporary social movements. However, others point out that many social movements of the last hundred years, like the Mau Mau in Kenya, grew up to oppose Western colonialism. Either way, social movements have been, and continue to be, closely connected with democratic political systems. Occasionally, social movements have been involved in democratizing nations, but more often they have flourished after democratization. Over the past 200 years, they have become part of a popular and global expression of dissent.[2]
Modern movements often utilize technology and the internet to mobilize people globally. Adapting to communication trends is a common theme among successful movements. Research is beginning to explore how advocacy organizations linked to social movements in the U.S.[3] and Canada[4] use social media to facilitate civic engagement and collective action.
Political science and sociology have developed a variety of theories and empirical research on social movements. For example, some research in political science highlights the relation between popular movements and the formation of new political parties, as well as the function of social movements in agenda setting and influence on politics. For more than ten years, social movement groups have been using the Internet to accomplish organizational goals. It has been argued that the Internet helps to increase the speed, reach and effectiveness of social movement-related communication as well as mobilization efforts, and as a result, it has been suggested that the Internet has had a positive impact on social movements in general.[4][37][38][39]
Much discussion has been generated recently on the topic of social networking and the effect it may have on the formation and mobilization of social movements.[40] For example, the Coffee Party first emerged on the social networking site Facebook. The party has continued to gather membership and support through that site and through file-sharing sites such as Flickr. The 2009–2010 Iranian election protests also demonstrated how social networking sites are making the mobilization of large numbers of people quicker and easier. Iranians were able to organize and speak out against the election of Mahmoud Ahmadinejad by using sites such as Twitter and Facebook. This in turn prompted widespread government censorship of the web and social networking sites.
The sociological study of social movements is quite new. The traditional view often perceived movements as chaotic and disorganized, treating activism as a threat to the social order. The activism of the 1960s and 1970s ushered in a new world opinion about the subject, and models were introduced to understand the organizational and structural power embedded in social movements.



A key attraction of the Internet and the World Wide Web for many has been the view that they provide arenas free from authoritative control: havens of free speech and freedom of expression. In the past few years, the role that social media in particular have played in various struggles for democracy not only indicates the power of social media in civil struggles, but has led to them collectively being described as 'liberation technology'. Liberation Technology, however, offers a range of perspectives on the use of the Internet and social media during periods of social upheaval in countries around the world.

Some challenge the view that the Internet and digital technologies are unregulated, arguing that there are in fact more boundaries in the online world than are realised. Some of these boundaries are socially constructed: people may join Internet groups that focus on their own political and social beliefs, and by doing so may restrict themselves to communicating only with others who share those belief systems, walling themselves off from people who hold opposing views. In countries governed by oppressive regimes, Internet users may find their access to certain search engines and Internet sites restricted by arrangements between the international conglomerates that own the digital platforms and the country's government.

Liberation Technology thus offers a balance of views: those which illustrate the empowerment the Internet and digital technologies offer citizens ruled by oppressive regimes, and, in contrast, the empowerment of such regimes to censor Internet sites, and to monitor and take action against those of their own citizens whose digital communications are seen as subversive.
Social upheavals can thus be facilitated by social media acting as a liberating technology, though it is rarely their sole cause.
The Internet and the Open Education Movement
OpenCourseWare (OCW) consists of course lessons created at universities and published for free via the Internet. OCW projects first appeared in the late 1990s and, after gaining traction in Europe and then the United States, have become a worldwide means of delivering educational content.

History

The OpenCourseWare movement started in 1999 when the University of Tübingen in Germany published videos of lectures online for its timms initiative (Tübinger Internet Multimedia Server).[1] The OCW movement only took off, however, with the launch of MIT OpenCourseWare at the Massachusetts Institute of Technology (MIT) and the Open Learning Initiative at Carnegie Mellon University[2] in October 2002. The movement was soon reinforced by the launch of similar projects at Yale, the University of Michigan, and the University of California Berkeley.
MIT's reasoning behind OCW was to "enhance human learning worldwide by the availability of a web of knowledge".[3] MIT also stated that it would allow students (including, but not limited to its own) to become better prepared for classes so that they may be more engaged during a class. Since then, a number of universities have created OCW, some of which have been funded by the William and Flora Hewlett Foundation.[3]

Principles

According to the website of the OCW Consortium, an OCW project:
  • is a free and open digital publication of high quality educational materials, organized as courses.
  • is available for use and adaptation under an open license, such as certain Creative Commons licenses.
  • does not typically provide certification or access to faculty.[4]

edX

Ten years after the US debut of OCW, in 2012 MIT and Harvard University announced the formation of edX, a massive open online course (MOOC) platform to offer online university-level courses in a wide range of disciplines to a worldwide audience at no charge. This new initiative was based on MIT's "MITx" project, announced in 2011, and extends the concepts of OCW by offering more structured formal courses to online students, including in some cases the possibility of earning academic credit or certificates based on supervised examinations. A major new feature of the edX platform is the ability for students to interact with each other and with teachers in online forums. In some cases, students will help evaluate each other's work, and may even participate in some of the teaching online.
In addition, edX is being used as an experimental research platform to support and evaluate a variety of other new concepts in online learning.

Problems

A problem is that the creation and maintenance of comprehensive OCW requires substantial initial and ongoing investments of human labor. Effective translation into other languages and cultural contexts requires even more investment by knowledgeable personnel. This is one of the reasons why English is still the dominant language, and fewer open courseware options are available in other languages.[5] The OCW platform SlideWiki[6] addresses these issues through a crowdsourcing approach.

Americas

Colombia

  • Universidad Icesi, OpenCourseWare de la Universidad Icesi[7]


Asia

China

OpenCourseWare, originally initiated by MIT and the Hewlett Foundation, began its movement in China in September 2003, when MIT and the Internet Engineering Task Force (IETF) joined with Beijing Jiaotong University to organize an OpenCourseWare conference in Beijing. As a result of this conference, 12 universities petitioned the government to institute a program of OpenCourseWare in China. The group included some of the most prestigious universities in China, as well as the Central Radio and Television University, China's central open university, which covers more than 2 million students.
As a result of this petition, the Chinese government approved the establishment of CORE (China Open Resources for Education)[14] to promote OpenCourseWare in Chinese universities, with Fun-Den Wang (the head of the IETF) as chairman. CORE is an NGO supported by the Hewlett Foundation, the Internet Engineering Task Force (IETF) and other foundations. According to CORE's website, it has nearly 100 Chinese universities as members, including the most prestigious universities in China, such as Tsinghua University, Peking University and Shanghai Jiaotong University.[15] The organization recruited volunteers to translate foreign OpenCourseWare, mainly MIT OpenCourseWare, into Chinese and to promote the application of OpenCourseWare in Chinese universities. By February 2008, 347 courses had been translated into Chinese, and 245 of them were used by 200 professors in courses involving a total of 8,000 students. CORE has also tried to translate some Chinese courses into English, but the number is small and some have only their titles translated.[16] In addition, 148 comparative studies have been produced comparing MIT curricula with Chinese curricula using MIT OpenCourseWare material.[17] CORE's offices are hosted within the China Central Radio and Television University, and it receives partial funding from the IETF and the Hewlett Foundation.[18] It also hosts annual conferences on open education; the 2008 conference was co-located with the international OpenCourseWare Consortium conference, which brought a large number of foreign participants.[19]
Even before the OpenCourseWare conference in Beijing and the establishment of CORE, on April 8, 2003, the Ministry of Education had published a policy launching the China Quality Course (精品课程) program.[20] This program accepts applications from university lecturers who wish to put their courses online, and gives grants of between $10,000 and $15,000 CAD per course that is put online and made available free of charge to the general public (ibid.). The most prestigious award is the "national level" CQOCW, followed by the "provincial level" and the "school level". From 2003 to 2010, 746 universities produced 3,862 courses at the national level.[21] According to the official website of the China Quality Course program, the total number of courses available online is more than 20,000.[22] These typically include the syllabus, course notes, overheads and assignments, and in many cases audio or video of the entire lectures.[18] The scale of this project has also spurred extensive research activity: over 3,000 journal articles have been written in Chinese on the topic of OpenCourseWare.[23]

Pakistan

The Virtual University (Urdu: ورچوئل یونیورسٹی; VU) is a public university located in an urban area of Lahore, Punjab, Pakistan, with an additional campus in a residential area of Karachi, Sindh.
Established in 2002 by the Government of Pakistan to promote distance education in modern information and communication sciences, the university is noted for its online lectures and for broadcasting rigorous programs regardless of students' physical locations. It offers undergraduate and postgraduate courses in business administration, economics, computer science, and information technology. Because of its heavy reliance on delivering lectures over the Internet, Pakistani students residing overseas in several other countries of the region are also enrolled in the university's programs.

India

The National Programme on Technology Enhanced Learning (NPTEL) is a Government of India-sponsored collaborative educational programme. By developing curriculum-based video and web courses, the programme aims to enhance the quality of engineering education in India. It is jointly carried out by seven IITs and IISc Bangalore, and is funded by the Ministry of Human Resource Development of the Government of India.
Flexilearn is an open course portal initiated by Indira Gandhi National Open University. Apart from providing free course materials, Flexilearn also allows learners to enroll in a course, appear for exams conducted by the university, and thereby earn certification.
To provide open access to resources for school education, the Department of School Education and Literacy of the Ministry of Human Resource Development, Government of India, and the Central Institute of Educational Technology, National Council of Educational Research and Training, launched the National Repository of Open Educational Resources (NROER). Anyone can participate in, contribute to, curate and organise resources and activities, growing the repository to reach every teacher and every student in all languages. Its target audiences are as follows.
Teachers: The repository is primarily for teachers, giving them access to a variety of resources in different subject areas. The idea is to introduce teachers to a bouquet of resources and provide them with the opportunity to pick and choose those which suit their classroom needs. In addition to accessing resources and using them in the classroom, teachers can also create and contribute resources.
Teacher educators: The repository aims to house various policy documents, for example copies of the National Curriculum Frameworks, National Focus Group papers on all subjects, and other policy documents helpful for teacher educators.
Students/parents: Students can access a variety of resources, browsable by grade, subject and language.
Photographers: Photographs and images that can be mapped to the school curriculum are invited. Photographers, or any individual with access to such images, can contribute to the repository, making the images relevant for school students and teachers by tagging them appropriately and providing relevant keywords.
Producers: Documentary filmmakers and audio or video producers who have produced films or programmes can contribute them to the repository. By reviewing the spectrum of content the repository plans to offer, they can also create and contribute new content.
Other government and non-government organizations: E-content is available in abundance, and many organisations have been creating it for years. The NROER aims to bring such organisations on board so that the content they create can be mapped to the school curriculum and made available to teachers and students.

Japan

OpenCourseWare, originally initiated by MIT and the Hewlett Foundation, was also introduced and adopted in Japan.
In 2002, researchers from the National Institute of Multimedia Education (NIME) and the Tokyo Institute of Technology (Tokyo Tech) studied MIT OpenCourseWare, leading them to develop an OCW pilot plan with 50 courses at Tokyo Tech that September.[24] Later, in July 2004, MIT gave a lecture about MIT OpenCourseWare at Tokyo Tech that prompted the first meeting of the Japan OCW Alliance. The meeting was held with four Japanese universities that had been recruited mainly through the efforts of MIT professor Miyagawa and his personal contacts. In one case, the connection was that the former president of the University of Tokyo was an acquaintance of Charles Vest, the former president of MIT.[25]
In 2006, the OCW International Conference was held at Kyoto University, where the Japanese OCW Association was reorganized into the Japan OCW Consortium.[24] At that time the consortium offered over 600 courses; it currently has 18 university members, including the United Nations University (JOCW, n.d.). On Japanese university campuses there are few experts in content production, which makes local support difficult, and many universities have had to out-source their production of OCW. The University of Tokyo, for example, has mainly employed students to create OCW.[24]
The motivation for joining the OCW movement seems to be to create positive change among Japanese universities, including modernizing presentation style among lecturers, as well as sharing learning material.[25] Japanese researchers have been particularly interested in the technical aspects of OCW, for example in creating semantic search engines. There is currently a growing interest for Open Educational Resources (OER) among Japanese universities, and more universities are expected to join the consortium.[26]
In order to become an integral institution that contributes to OER, the JOCW Consortium needs to forge solidarity among its member universities and build a rationale for OER of its own, different from that of MIT, one that would support the international deployment of Japanese universities and of Japanese-style e-Learning.[26]

Europe

France

  • France Universite Numerique: The Mooc portal for French Universities, founded in 2013 with state support.

Middle East

In the United Arab Emirates, a discussion led by Dr. Linzi J. Kemp of the American University of Sharjah[29] has begun about sharing teaching and learning materials ('open courseware') through a community of educators and practitioners in the GCC. There is growing availability of high-quality, free, open-access materials shared between universities, e.g. MIT (USA), and an example of resource sharing through The Open University (UK) OpenLearn platform. Kemp (2013) proposes that teaching and learning will be enhanced when institutions of higher education work together to bring shared knowledge into classrooms. Furthermore, opening this platform to practitioners, e.g. employers, will strengthen the relationship with industry and help ensure that teaching and learning are available and beneficial to a wider community.


Mobile Technology And Democratic Development
The Internet, cell phones and related technologies are profoundly affecting social, economic and political institutions worldwide, particularly in new and emerging democracies. In the hands of reformers and activists, these tools can overcome resource disparities and entrenched monopolies of power and voice.
Examples abound of uses of the Internet in the democratic context, from promoting citizen advocacy to increasing government transparency and accountability. Citizens, civil and non-governmental organizations, companies, civil servants, politicians, and large state and private-sector bureaucracies are employing technologies and the Internet to enhance communication, improve access to important information, and increase their efficiency, resulting in strengthened democratic processes and more effective governance. Encouraging and improving the use of such technologies in democratic development has thus become an imperative spanning a broad range of programming areas for NDI.
Increasingly, in response to the needs and requests of our partners, NDI has implemented a diverse range of programs with critical information and communications technology (ICT) components, targeting democratic institutions and/or supporting democrats in general. Everywhere NDI works, democracy practitioners and activists are using new technologies to improve access to information across borders and issue areas, and to enhance their efficiency and effectiveness.
About ICT Programming & Democracy
Information and communication technologies present benefits and challenges to democratic development. The Internet provides a voice for all people and groups - democratic and undemocratic. Undemocratic forces are employing powerful technologies with equal, if not greater, efficiency and scope, which further highlights the importance of empowering democrats and institutions in emerging democracies to use ICTs as a tool to enhance the information sharing, efficiency and transparency that are crucial to building and sustaining democracy.
Providing access to all citizens, particularly those in less developed socioeconomic areas of developing and developed nations, presents a related developmental challenge. Lack of access to technologies such as telephones, television and radio has frustrated development efforts for decades. Internet access is currently limited to a small segment of the world's population, and the technological divide between those with access and those without is significant and growing. At first glance this appears to pose a serious challenge to exploiting the potential of the Internet for democratic development and citizen participation in democratic governance. NDI's experience suggests, however, that pragmatic strategies for using the Internet and related technologies notwithstanding the technological divide are critical to beginning to narrow the gap and to enhancing participation by those currently disconnected.
The primary factors that hinder access to the Internet and related technologies for the global, and especially rural, populations are: 1) level of technology and infrastructure; 2) cost; 3) cultural, linguistic or other social barriers; and/or 4) low political will to address these issues. Yet there are thousands of important organizations, and millions of people, who do not necessarily face these issues and who reside in emerging democracies. In many countries these are civil servants, members of Parliament and parliamentary staff, NGO and civil-society organization staff and members, teachers and students, leadership and staff in various institutions inside and outside the governmental sphere, political party members and/or staff, employees in all spheres of the private sector, and more. Many of these people come from disconnected communities, but they work for or are involved with organizations that could and should be connected.
Many of these organizations are disconnected not because they lack telecommunications infrastructure or providers of equipment and training, nor because they lack a recognition of the importance of getting connected and communicating or sharing information. They remain disconnected because they lack either the moderate financial resources required, or the technical and managerial expertise to adequately plan for and procure the needed equipment, systems and services. These are areas where NDI provides assistance.
As a democracy practitioner, NDI’s developmental assistance must deal with the task at hand: providing useful support to enhance democratic development through Internet and related technologies, where appropriate, within those sectors of society where such support is currently practical. In doing so, we inevitably bridge people within these societies from one side of the divide to the other.
NDI has learned to apply both an in-depth knowledge of the democratic workings of its partners gained over time, and the technical and project management expertise needed to work with information technology (IT) vendors -- providing a crucial middle layer necessary for a successful IT initiative. In addition, NDI's success is linked to that of the project and the partner, with the ultimate goal being the development of sustainable systems using local staff, equipment and service providers which support the democratic process. NDI's ICT and democracy programs are designed to support democratic principles such as good governance, accountability, transparency, efficiency, communication and outreach. ICTs are used in a crosscutting manner, strengthening initiatives in governance, political parties, election processes, citizen participation, and gender programs.
Introduction to our ICT Programs
NDI has conducted successful ICT programs for over a decade in all regions of the world. Project Vote, an NDI voter education program supporting the 1994 South African local government elections, had an early and impactful ICT component. After the elections, NDI was the only organization in the country that could gather information on elected councilors from all 768 of South Africa's newly created municipalities (then called “transitional local authorities”), and that could compile that information in a database. The database was printed in a volume, widely distributed, and then handed over to the Department of Local Government and Housing and the South African Local Government Association for ongoing maintenance. Since that time, NDI has conducted a wide variety of programs with ICT components around the world.
Over time, as technological changes have accelerated and Internet use has become more prevalent, more NDI partners and program managers have asked for ICT-related assistance. Donors have also become more interested in this type of democracy programming. NDI has participated and presented its work in several international forums, such as the UK Foreign and Commonwealth Office (FCO) Workshop on the Internet and Democracy Building in Wilton Park, U.K., in May 2001, and the International IDEA Democracy Forum in Stockholm. Since 2003, further collaboration opportunities -- on the World Bank's Development Gateway project, with Steven Clift of e-democracy.org, and with the National Academy of Engineering on Technology and Peacebuilding -- have characterized NDI’s participation in a broad range of ICT initiatives. In 2007, NDI was recognized as one of the “Top 10 Who Are Changing the World of the Internet and Politics” by PoliticsOnline and the World E-Gov Forum.
As the field of technology and democracy becomes more visible, NDI will continue its leading work, drawing on more than 14 years of experience in emerging democracies. Through this experience, NDI has come to recognize the potential that technology holds for democracy support, and it continues to include technology components in its democratic development programs where beneficial, feasible and sustainable.
Strategies
Unique Relationships
NDI is optimally positioned to succeed with technology-and-democracy related programs not only because it can bridge the gap in developing IT systems with its partners, but also because of its existing field offices, relationships and contacts in dozens of countries.
An effective ICT project requires a strong relationship with a partner, with crucial support and buy-in from the partner’s senior leadership and an excellent understanding by NDI of the partner’s business processes and objectives. NDI has program staff around the world that have established these relationships, and who are involved in various forms of institutional support. Such relationships build awareness of core needs within a partner organization, positioning NDI as uniquely qualified to provide the assistance an organization needs.
NDI’s network of program staff members gives the institute a unique ability to assess its partners’ proposed technology systems, and to assist in developing estimates of initial and ongoing budgets for building such systems. NDI can then support proposed projects by sharing the skills needed to implement sustainable ICT programs.
Sustainability through Organizational Capacity Building
While in many aspects of NDI programming, value comes through sharing knowledge and experience among democratic leaders or documenting and sharing democracy-building experience, building sustainable ICT systems requires a slightly different strategy.
Technology programming typically involves building systems (websites, databases, communication networks, etc.) and thus requires organizational changes within our partner institutions in order to maintain these systems. These changes drive planning, assessment, implementation and program evaluation.
Sustainability means that development of an Internet or other IT system must happen in parallel with a process of building capacity within the partner organization to support and maintain the system. The partner must form the necessary relationships within its country to meet its ongoing needs for equipment, support and services. This approach may result in a higher initial investment, and requires a longer-term engagement (several months to several years) with the partner organization as it aligns its staffing and budgeting to meet the long-term commitment of supporting the systems. Over that longer term, however, this approach has proven effective at allowing NDI and its donors' funds to continue to bring value to its partners, and to support democratic development long after the Institute’s initial program has terminated.
Areas of Expertise
To strengthen the broadest range of democracy and governance programs, NDI strives to pioneer new applications of technology through inventive and inclusive techniques and apply them to the unique challenges within developing democracies. Program areas in emerging democratic countries with significant ICT components include: governance, elections and political processes, political parties, women’s empowerment and citizen participation.
Democratic Governance
NDI's governance work tends to emphasize the political dimension of democratic governance within four main practice areas: constitutional reform, legislative development, local government, and public integrity. NDI has assisted partners in developing legislative tracking systems and building websites for parliaments in countries throughout the world, including sub-Saharan Africa and the Middle East. This support has included assistance for parliaments and executive branch offices in technology planning and training, development of technical support units, provision of computers and networking resources to institutions, and building of voting and translation systems. In September 2005 NDI conducted its largest legislative technology program to date, providing substantial technology assistance to the Iraqi National Assembly to help it improve management of legislative information.
Macedonia Casework Tracking Database
NDI's work with the National Assembly and political parties in Macedonia on a constituency outreach program resulted in the establishment of 45 new constituency offices in 2004. The success of this program led to difficulties in managing the offices’ case loads, and made it evident that the offices needed to replace paper filing methods for tracking constituent casework with electronic tools.
In response, NDI assisted the Macedonian political parties in developing a casework tracking database in 2005, and deploying it to the 45 constituency offices. The database facilitates reporting and record keeping, and allows office assistants to enter cases they received over the phone, by letter, or by email. The database is trilingual; its operators can switch between Macedonian, Albanian and English.
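As a rough illustration of what such a casework tool involves (the table and field names below are my own invention, not the actual NDI/Macedonia schema), a minimal casework store might look like this in SQLite:

```python
import sqlite3

# Illustrative schema only -- not the real database design.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cases (
        id        INTEGER PRIMARY KEY,
        office    TEXT NOT NULL,       -- which constituency office logged it
        channel   TEXT NOT NULL,       -- 'phone', 'letter' or 'email'
        language  TEXT NOT NULL,       -- 'mk', 'sq' or 'en'
        summary   TEXT NOT NULL,
        status    TEXT DEFAULT 'open'  -- 'open' or 'closed'
    )
""")

def log_case(office, channel, language, summary):
    """Record a constituent case as an office assistant would."""
    conn.execute(
        "INSERT INTO cases (office, channel, language, summary) VALUES (?, ?, ?, ?)",
        (office, channel, language, summary),
    )

log_case("Bitola", "phone", "mk", "Pension payment delayed")
log_case("Tetovo", "email", "sq", "Road repair request")
log_case("Bitola", "letter", "mk", "School transport complaint")

# Reporting: open cases per office -- the kind of summary that replaces
# paper filing and makes record keeping and follow-up manageable.
rows = conn.execute(
    "SELECT office, COUNT(*) FROM cases WHERE status = 'open' "
    "GROUP BY office ORDER BY office"
).fetchall()
print(rows)  # [('Bitola', 2), ('Tetovo', 1)]
```

Note that the trilingual aspect reduces to a language tag on each record plus translated interface strings; the data model itself stays the same in all three languages.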
Elections and Political Processes
The most prominent of NDI’s recent technology innovations have been in the elections area, where NDI has pioneered sophisticated uses of cell phones for domestic election observation. Combining an SMS (text message) based reporting system with NDI's rigorous observation methodology, NDI partners can enhance the integrity of elections by alerting authorities to problems early enough to allow remedies. The speed of SMS-based reporting also allows the Institute’s partners to publicize an assessment of the quality of polling and tabulation, exposing problematic elections and increasing public confidence in credible elections. Further technology programs in this field include building of databases and tracking software for international observation missions, and designing of data analysis software for domestic groups who monitor election irregularities and conduct parallel vote counts.
In Sierra Leone's most recent national election and runoff, in 2007, 500 election observers at polling stations around the country sent text messages through mobile phones to report on polling irregularities. Led by the National Election Watch (NEW), a coalition of over 200 domestic and international NGOs in the country, monitors used this rapid reporting system to help stabilize the political environment and support the peaceful transfer of power after a long civil war.
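To make the mechanics concrete, here is a minimal sketch of how an SMS-based observation pipeline could work. The message format and field codes are assumptions for illustration; the actual NDI/NEW reporting protocol is not described in the article:

```python
import re
from collections import Counter

# Hypothetical message format: each observer texts
# "<station id> <CODE>=<value> ...", e.g. "PS1041 OPEN=1 QUEUE=230 INCIDENT=0".
REPORT = re.compile(r"^(?P<station>PS\d+)\s+(?P<fields>(?:[A-Z]+=\d+\s*)+)$")

def parse_report(sms: str):
    """Parse one incoming SMS; return (station, fields) or None if malformed."""
    m = REPORT.match(sms.strip())
    if m is None:
        return None  # malformed message: queue it for a human to review
    fields = dict(pair.split("=") for pair in m.group("fields").split())
    return m.group("station"), {k: int(v) for k, v in fields.items()}

def tally_incidents(messages):
    """Count stations reporting incidents -- the early-warning signal that
    lets the coalition alert authorities while remedies are still possible."""
    flagged = Counter()
    for sms in messages:
        parsed = parse_report(sms)
        if parsed and parsed[1].get("INCIDENT", 0) > 0:
            flagged[parsed[0]] += 1
    return flagged

inbox = [
    "PS1041 OPEN=1 QUEUE=230 INCIDENT=0",
    "PS2007 OPEN=1 QUEUE=85 INCIDENT=1",
    "garbled text",
]
print(tally_incidents(inbox))  # Counter({'PS2007': 1})
```

The structured codes matter: free-text SMS would need human triage, while fixed codes let hundreds of reports be tallied within minutes of polls closing.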
Political Party Development
NDI assists political parties around the world to improve various aspects of their work through employing technology. NDI has worked with global experts in online campaigning and advocacy to help political parties throughout the world take advantage of advances in online campaigning and member tracking. Online campaigning support often consists of assistance in the development of party websites -- including functions such as online polling and member registration and subscription lists -- as well as assistance with organizing, administrative functions and financial management. Party member tracking databases and internal communication strategies help parties to improve communication, organize activities and be more internally democratic.
The bilingual PPN web portal facilitates NDI political party training programs in Latin America. The site offers political parties across Latin America and the Caribbean access to comparative information, tools for party building, and techniques on political party reform; in the process, it strengthens the existing network of reform-minded leaders and provides opportunities for the exchange of ideas and expertise, while facilitating the administration and delivery of future NDI political party training programs. An online training feature enables NDI trainers to expand the reach of the Institute’s training program, transforming one to two weeks of face-to-face training into eight to ten months of online and in-person training and mentoring. The web site -- a combination of a resource portal, community-building portal, and interactive training-program management system -- primarily addresses NDI's two flagship programs in Latin America: PREPA, a party renewal program focused on training of trainers, and the Leadership Program, which encourages modernization and renewal by strengthening the skills of emerging leaders.
Women’s Participation
NDI’s most significant technological contribution to its women’s programs has been the online extension of partner networks. The Win with Women Global Initiative typifies this outreach by helping women around the world share resources, experiences and ideas, with the goal of overcoming barriers and challenges to women’s full participation in politics.
An innovative, multilingual global platform, the iKNOW Politics web site is designed to promote gender-sensitive governance and expand the role and participation of women in political and public life. iKNOW Politics connects parliamentarians, representatives, candidates, political party leaders and members, researchers, academics, and practitioners across borders, generations and faiths. The network equips them with the materials, expertise and best practices to make their political mark.
Citizen Participation
The Institute has assisted NGO partners and civic groups with development of databases and tracking software for members, trainers and volunteers, enabling NDI’s partners and groups to coordinate training activities, organize focus groups, distribute materials and generate statistical reports. NDI’s technological support in developing civil society has also included consulting on websites for public outreach, assisting with online discussion groups to help sustain networks of activists, and developing secure intranets that incorporate collaboration tools so groups can work together in confidence on policy or planning documents.
Improving Civic Data Collection in Angola
Through a coordinating body, the National Platform, NDI has assisted Angolan civic networks with standardization of trainings, forms and press statements around election and civic education programs since 2006. This assistance included technical guidance on development of new tools for data collection and reporting. The Platform used a computerized data collection system, with scannable observation forms, to record and sort observation information on the election registration process ahead of the 2008 national poll. The new technology uses scanners with “intelligent character recognition” software, which reads handwritten data on observation forms and uploads that information to a centralized computer database. The database processes scanned observer reports from around the country, reducing data collection time and the potential for human error.
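A sketch of the triage step such a system typically needs: ICR engines attach a confidence score to each recognized field, and forms with any low-confidence field are routed to a human clerk rather than straight into the central database. The field names and threshold below are illustrative assumptions, not details from the Angola program:

```python
# Forms below this recognition confidence go to manual review (assumed value).
CONFIDENCE_THRESHOLD = 0.90

def triage(scanned_forms):
    """Split scanned observer forms into auto-accepted vs needs-human-review.

    Each form carries ICR output as {field: (recognized_value, confidence)}.
    """
    accepted, needs_review = [], []
    for form in scanned_forms:
        low = [
            name
            for name, (value, conf) in form["fields"].items()
            if conf < CONFIDENCE_THRESHOLD
        ]
        (needs_review if low else accepted).append((form["form_id"], low))
    return accepted, needs_review

forms = [
    {"form_id": "A-001",
     "fields": {"station": ("Luanda-12", 0.98), "registered": ("412", 0.95)}},
    {"form_id": "A-002",  # '3I7' looks like a misread '317' -- low confidence
     "fields": {"station": ("Huambo-03", 0.97), "registered": ("3I7", 0.61)}},
]
accepted, needs_review = triage(forms)
print(accepted)      # [('A-001', [])]
print(needs_review)  # [('A-002', ['registered'])]
```

This division of labor is where the time and error savings come from: the machine handles the bulk of clean forms, and people only see the ambiguous handwriting.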


Questions That Were to Be Answered
In this section I will take some of the additional case studies at the end and round up the salient points from the articles for you!
‘Cobweb’: a UK company providing liberating IT cloud-computing services
Should governments allow people to vote online? Well, according to the article, turnout even in typical democratic countries is low, so it would help:
“In 2012, 60 percent of eligible voters (129 million American citizens) headed to the polling booth, including the largest number of voters ever among African-Americans, Latinos, and Asian-Americans, and large numbers of women and young people—many of whom voted for the first time ever. But when 40 percent (86 million American citizen adults) are not voting, the simple fact is our society—and democracy writ large—suffers.”
The article goes on to question the methods used, and cost factors:
“The fundamental problem is that the way we exercise our right to vote remains trapped in the 19th century. Some election officials still use unwieldy reams of paper to check off voters, voting machines vary from precinct to precinct and frequently break, and voters are driving to city hall or the public library to get their voter registration forms in many states.
What’s more, it’s costing Americans to participate in the process both in terms of the time and effort they must invest in order to register and vote—and in taxpayer dollars. In Oregon, where voter turnout is remarkably high in comparison with the rest of the nation, the state spends $4.11 to process each voter registration form. Meanwhile in Canada, the average cost is less than thirty-five cents.”
As we all know the technology already exists to make voting easier, and less costly:
“The good news is the same innovative spirit and technological savvy that is making so many aspects of our lives easier—from travelling paper-free, to banking from home, to tracking on our smartphones how many miles we’ve run or how many calories we’ve consumed—can also fix the problems with the way we vote. Digital technology and big data systems are continuing to change the world in which we live by helping us track massive amounts of data, protect against fraud, and democratize things that used to be the sole property of the elite and well-connected. It makes sense that those tools can help lead us to a more just and effective voting system as well.”
Interestingly, the increase in voter apathy has been noticeable since the 1980s, according to a 2002 report (http://www.idea.int/publications/vt/upload/VT_screenopt_2002.pdf), which shows turnout over the period 1945-2001 ranging from 94% in Australia down to 21% in Mali.
Microfinancing – what a great article (http://thenextweb.com/dd/2014/11/02/technology-transforming-saving-microfinance/) by Vikas Lalwani. Microfinancing is basically lending small amounts of money to the poor to start up businesses or grow existing ones. Sounds great, but are there problems?
“To ensure high repayment rate, you need to employ people (field force) who will work in the field doing tasks like background checks, loan disbursement, follow-ups and collection. But these resources cost money, which proportionately increases the interest rate that poor borrowers will have to pay. The global average interest and fee rate is estimated at 37 percent, with rates reaching as high as 70 percent in some markets. This means that world’s poorest are paying the highest interest rate and it defeats the whole purpose behind microfinance.”
But technology is helping solve these issues:
“To break this stalemate, a new breed of microlending services is coming up which is taking everything online. There are no offices where they operate, no field force and no offices even for themselves. Zidisha and Kiva Zip are pioneers of this. Zidisha, which is active in eight African countries and Indonesia, is mostly run by virtual volunteers with only two full-time employees. All loans are disbursed electronically and there are no offices at all. It filters fraudulent applications using machine-learning algorithms developed by Sift Science. To calculate credit risk, it uses the services of Bayes Impact, another YC non-profit, which gets data scientist teams to tackle social problems. Zidisha has been able to bring down the interest rates to as low as 5.8 percent, an astronomical improvement over previously existing rates.” Repayment rates have proved to be higher than in traditional forms of microlending.
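The difference those two rates make is easy to check with back-of-the-envelope arithmetic. This uses simple flat annual rates on an illustrative $200 loan; real microloan terms (declining balances, fees, weekly repayment schedules) vary widely:

```python
# Simple flat-rate comparison -- the loan size and flat-rate assumption
# are illustrative, not figures from the article.
def annual_cost(principal, rate):
    """Interest owed over one year at a flat annual rate."""
    return principal * rate

loan = 200.0  # a typical small microloan, in dollars (assumed)
for label, rate in [("field-force model", 0.37), ("online model (Zidisha)", 0.058)]:
    print(f"{label}: interest on ${loan:.0f} = ${annual_cost(loan, rate):.2f}")
# field-force model: interest on $200 = $74.00
# online model (Zidisha): interest on $200 = $11.60
```

On these assumptions the borrower keeps roughly $62 more per year, which for someone living on a few dollars a day is exactly the margin the article argues microfinance was supposed to protect.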
Can mobile platforms help free us from basic needs? The chosen article (http://www.businessdailyafrica.com/Tech-firm-hits-on-solution-to-water-scarcity-in-slums-/-/1248928/1380090/-/item/1/-/14l1212z/-/index.html) talks about how technology is helping residents of Kibera, a slum area of Nairobi, Kenya, locate drinkable water nearby via SMS on their mobiles. Previously people had to walk miles for their water, not knowing if there would be supplies, and had to carry 20-litre jerry-cans home with them. The data gathered from the vendors is also helping to cut out malpractice:
“The water company is further beset by the perennial headache of illegal connections, most of which are reported in slum areas and informal settlements. Last year, illegal connections were reported to cause a disparity in the billing estimate and actual collections and Sh350 million was collected per month against a target of Sh450 million. The data collected through m-maji could be used by water companies to weed out water vendors who have no licence to operate and those who are doing so through illegal connections thus saving up on lost revenue.”
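In outline, an SMS lookup like m-maji’s could work as follows. The vendor data, zone names, message format and prices here are invented for illustration; the article does not document the real system’s internals:

```python
# Hypothetical vendor registry -- in the real service this data is
# gathered from the vendors themselves.
VENDORS = [
    {"name": "Vendor A", "zone": "kibera-east", "licensed": True,  "price_ksh_per_20l": 3},
    {"name": "Vendor B", "zone": "kibera-east", "licensed": False, "price_ksh_per_20l": 2},
    {"name": "Vendor C", "zone": "kibera-west", "licensed": True,  "price_ksh_per_20l": 4},
]

def reply_for(zone_query: str) -> str:
    """Build the SMS reply: licensed vendors in the zone only, cheapest first."""
    matches = sorted(
        (v for v in VENDORS if v["zone"] == zone_query and v["licensed"]),
        key=lambda v: v["price_ksh_per_20l"],
    )
    if not matches:
        return "No licensed vendor found near you."
    return "; ".join(f'{v["name"]} ({v["price_ksh_per_20l"]} KSh/20l)' for v in matches)

print(reply_for("kibera-east"))  # Vendor A (3 KSh/20l)
```

Note how the licence flag does double duty: it shapes the reply to the resident, and the same registry gives the water company the list of unlicensed sellers the article describes weeding out.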
How robots could be used to fight ebola (http://www.cnet.com/news/how-robots-could-be-used-to-fight-ebola/) is a great example of how technology can help protect humans against exposure to disease. The idea is to deploy telepresence robots in the places where ebola has spread: the robot mirrors each and every gesture of a remote operator who knows how to administer care, keeping health workers out of harm’s way (see the video: https://www.youtube.com/watch?v=BHxNQ1v7mWM).
The “Age of Edison” (http://articles.latimes.com/2013/feb/22/entertainment/la-ca-jc-ernest-freeberg-20130224) is a great article about how the invention of the light bulb freed us up to live a 24-hour existence; in the Victorian age it became possible for factories to operate all night. As sales of the invention grew among private individuals, it became clear that state intervention was needed to ensure it was available in more geographically remote places. If we think about the case of broadband today, the situation is similar.
In the UK the Prime Minister has just announced the Government’s intention to make Wi-Fi free on all trains (http://www.bbc.com/news/uk-politics-31421676), and eventually governments will have to find funds to subsidise other technological advances for the underprivileged in society.
Adherents of “transhumanism”—a movement that seeks to transform Homo sapiens through tools like gene manipulation, “smart drugs” and nanomedicine—hail such developments as evidence that we are becoming the engineers of our own evolution (http://www.smithsonianmag.com/science-nature/how-to-become-the-engineers-of-our-own-evolution-122588963/?no-ist). So why use these non-natural ways to grow the human species?
Transhumanists say we are morally obligated to help the human race transcend its biological limits; those who disagree are sometimes called Bio-Luddites. “The human quest has always been to ward off death and do everything in our power to keep living,” says Natasha Vita-More, chairwoman of Humanity+, the world’s largest transhumanist organization, with nearly 6,000 members. And the downsides? Some worry about the implications of transcendent technologies. Political scientist Francis Fukuyama, the author of “The End of History?” and a former member of the President’s Council on Bioethics, warns that efforts to rid ourselves of negative emotions could have unforeseen side effects, making us less human. “If we weren’t violent and aggressive, we wouldn’t be able to defend ourselves,” he wrote in Foreign Policy. “If we never felt jealousy, we would also never feel love.”
David Bollier writes about sousveillance as a response to surveillance (http://bollier.org/blog/sousveillance-response-surveillance) and offers the following definition:
Surveillance, of course, is the practice of the powerful monitoring people under their dominion, especially people who are suspects or prisoners – or today, simply citizens. Sousveillance — “to watch from below” – has now taken off, fueled by an explosion of miniaturized digital technologies and the far-reaching abuses of the surveillance market/state.”
He continues: “… sousveillance is an inevitable trend in technological societies and that, on balance, it ‘has positive survival characteristics.’ Sousveillance occurs when citizens record their encounters with police, for example. This practice exposed the outrageous police brutality against Occupy protesters (blasts of pepper spray in their faces at point-blank range) and helped transform small citizen protests against Wall Street into a global movement.”
What is the main reason for the need for sousveillance? “Sousveillance at least has the virtue of empowering ordinary people to protect themselves and to hold power accountable. One need only look at a few cautionary examples of ‘people’s recordings’ that have altered history: the Zapruder film (undermining the Warren Commission’s and news media’s ‘lone gunman’ claims), the Rodney King video (documenting L.A. police brutality), the videos of police violence against Occupy protesters and anti-Iraq War demonstrators. At a time when powerful corporations and government agencies are savagely violating our privacy with impunity, sousveillance is entirely comparable to the use of personal cryptography: a defense of our individual autonomy and our ability to sustain a free civil society.”
McKinsey’s September 2014 article “Offline and falling behind: Barriers to Internet adoption” notes: “More than 60 percent of the world’s population remains offline. Without removing crucial deterrents to Internet adoption, little will change—and more than 4 billion people may be left behind.”
The other 40% of the population has seen the following: “In a little more than a generation, the Internet has grown from a nascent technology to a tool that is transforming how people, businesses, and governments communicate and engage. The Internet’s economic impact has been massive, making significant contributions to nations’ gross domestic product (GDP) and fueling new, innovative industries. It has also generated societal change by connecting individuals and communities, providing access to information and education, and promoting greater transparency.”
The good news is that, on the current trajectory, an additional 500 million to 900 million people are forecast to join the online population by 2017. But this growth rate is not expected to continue. On the negative side: “About 75 percent of the offline population is concentrated in 20 countries and is disproportionately rural, low income, elderly, illiterate, and female.”
The offline population faces barriers to Internet adoption spanning four categories:
Incentives – such as the high costs that content and service providers face in developing and localizing relevant content and services and their associated business model constraints, low awareness or interest from brands and advertisers in reaching certain audiences, a lack of trusted logistics and payment systems
Low income and affordability – there is often a lack of adjacent infrastructure (such as roads and electricity), thereby increasing the costs faced by network operators in extending coverage
User capability – such as a lack of digital literacy (that is, unfamiliarity with or discomfort in using digital technologies to access and use information) and a lack of language literacy (that is, the inability to read and write)
Infrastructure – Barriers in this area include a lack of mobile Internet coverage or network access in addition to a lack of adjacent infrastructure such as grid electricity. The root causes of these consumer barriers include limited access to international bandwidth; an underdeveloped national core network, backhaul, and access infrastructure; limited spectrum availability; a national information and communications technology (ICT) strategy that doesn’t effectively address the issue of broadband access; and underresourced infrastructure development.
The report then goes on to identify countries where adoption is low due to high barriers, mainly in Africa and Asia, in contrast with adoption rates of around 80% in countries such as the USA. In summary they conclude: “Going forward, sustained, inclusive Internet user growth will require a multipronged strategy—one that will depend on close collaboration among players across the ecosystem, including governments, policy makers, nongovernmental organizations, network operators, device manufacturers, content and service providers, and brands.”

In considering the power of technology to liberate the introverted from stage fright (http://diymusician.cdbaby.com/2014/04/liberating-introverted-performing-artist/), James Wasem talks about the ability to set up online and stream a performance. The technology is fairly simple, and you can even make performances pay-per-view once you get a following.
Wael Ghonim wrote a book called Revolution 2.0 to describe his part in starting a people’s revolution against police mistreatment of civilians in Egypt, beginning in January 2011. In it he talks about how his Facebook page “anonymously called for accountability for Khaled (Said)’s death and an end to corruption within the Egyptian government. We [wanted] to expose the bad practices of the Egyptian police,” he says. “Because the last thing a dictator wants is that you expose their bad practices to its people.”
He kept his identity a secret: “I basically thought that my anonymity was my power, was the reason this page was so powerful,” he says. “A lot of people believed in what was there.” He goes on to say that the revolution did not take place ON social media, but that social media allowed communication to happen: “We used all the available tools in order to communicate with each other, collaborate and agree on a date, a time and a location for the start of the revolution,” he says. “Yet, starting Jan. 28, the revolution was on the streets. It was not on Facebook, it was not on Twitter. Those were tools to relay information, to tell people the truth about what’s happening on the ground.”
The MIT conference on the future of transportation (http://senseable.mit.edu/roadahead/) took place in November 2014. It says of the future:
“Alternate models of sharing and reconfigured access to mobility are fundamentally disrupting the transportation paradigm in cities. With the emergence of companies like Uber and ZipCar, citizens are rethinking private car ownership, while bike share systems are competing with traditional public and private transportation options, and start-up companies are creating innovative new service models. Yet these citizen-centric developments might come into conflict with traditional incumbents such as taxis, car companies and regulators.”
The other issue with Google-type driverless cars is the question of safety and regulation: insurance companies struggle with how to apportion blame for accidents.
Should we try to raise the dead..?
The ability to internally bridge the gap between the two ends of a severed spinal cord—not just rely on the support of an external carapace like the Ekso-Suit—would be nothing short of revolutionary for the neurosurgical field. Oh wait, looks like a team from EPFL has just invented a way to do just that—in rats.
The implantable device, dubbed e-Dura, is the brainchild of professors Stéphanie Lacour and Grégoire Courtine from Switzerland's École polytechnique fédérale de Lausanne research institution. The team had already developed a means of reinvigorating the function of partially severed spinal columns in lab rats through a combination of electrical and chemical stimulation—an incredible feat in its own right. But in order to apply the same method to humans, they'd need to implant a stimulation device directly to the spine for long periods of time.
This has never been possible before—especially in the delicate neurological system beneath the brain and spine's protective "dura mater" level—because any foreign body left in there causes near immediate inflammation and rejection. But the EPFL team's e-Dura device has been designed specifically to avoid rejection. They did so by making it stretchy.
They made it exactly as stretchy as the tissue surrounding it so that rather than sit atop the site like a medical-grade lump, it moves and flexes with the rest of the spine, minimizing friction with the dura mater.
"Our e-Dura implant can remain for a long period of time on the spinal cord or the cortex, precisely because it has the same mechanical properties as the dura mater itself. This opens up new therapeutic possibilities for patients suffering from neurological trauma or disorders, particularly individuals who have become paralyzed following spinal cord injury," Stéphanie Lacour, co-author of the paper explained in a press release.
The team has already successfully implanted a prototype of the device in a rat subject. Not only has it been in there for more than two months without any sign of rejection, the device helped get the rat up and walking around again after just a few weeks of training. Should this technology make it past human safety trials, paralysis may one day be as rare as polio.


A. The right to sensory privacy

Surveillance is often done in secret, through a network of hidden cameras. And cameras are often concealed in dark hemispherical domes so that we cannot see which way they are “looking”.

Imagine if we all walked around wearing such domes so that people could not see which way we were looking. It is impolite to stare, but surveillance cameras have been granted the right or affordance to bypass such politeness.

Whereas “sight” has now been granted to inanimate objects like buildings and light posts, which are exempt from social rules, humans should at least have a right to their own senses, and a right to secrecy or privacy regarding their functionality (i.e. not having to disclose whether or not one is recording). A person using a vision aid, or a visual memory aid, should not have to disclose the fact that they are differently abled. And a person recording an encounter with a robber or a (possibly corrupt) police officer should not need to disclose (and thereby risk violence over) the nature of their senses.
Just as buildings keep secrets about their surveillance systems “for security reasons”, people should be able to do so too! Thus a person should not need to prove that they are disabled before being “allowed” to use a camera. Likewise, it would be absurd if one needed special permission to use a cane, or to wear eyeglasses, regardless of a lesser or greater need that may exist for these items.

VII. MY PROPERTY, MY RULES!!!

A simple (though somewhat naive) form of sensory entitlement goes as follows: This is my store [or mall, or gas station, or city], and if you want to shop [or come] here you need to play by my rules, which means no cameras!

This propertarian model of veillance, in effect, defines surveillance as recording one’s own property (e.g. a department store recording its own premises, or a city’s police force recording “their” streets), and sousveillance as recording someone else’s property (e.g. a shopper or citizen recording the aisles of a store they don’t own, or a street they don’t own).

This model is problematic. (1) If property ownership were absolute, then it would also have to factor in the absolute ownership of one’s own senses, sensory information, body, clothes, eyeglasses, and the like as personal property and personal space. In this sense there is an intersection of two different absolute properties, i.e. one absolute property inside another absolute property. And it can get even more complicated: consider entity A driving a car owned by entity B, parked in an auto mechanic shop owned by entity C, while witnessing a crime being perpetrated by entity D, in a city governed by entity E, in state F of country G, and so on. A has a moral and ethical duty to witness and record the crime regardless of what B, C, D, E, etc. wish.

(2) Property ownership is actually not absolute. Human life is a more fundamental value than the property rights of another person. Therefore the most morally and ethically right thing for A to do is to secretly record the activities taking place, regardless of any rules set forth by B, C, D, etc. And if property owners continue to enforce such absolutist rules, then manufacturers have a moral and ethical duty to favour human health and safety by making computerized vision aids and the like as covert as possible. Thus sousveillance is inevitable, either by becoming acceptable, or by becoming covert (with strong moral and ethical justification) by design.

The boundaries of private property range from complete abolishment (e.g. certain forms of communism) to, at the other extreme, excesses that lead to a “tragedy of the anticommons” effect of extreme underutilization of resources [62]. A full understanding of the boundaries of private property enters into such concepts as nail houses, spite houses, and spite fences [62], [63]. From these concepts the author also extrapolates/introduces the concept of spite veillance (both spite surveillance and spite sousveillance), as in the spite fence case of Gertz v. Estes, 879 N.E.2d 617 (Ind. App. 2008), which also involved surveillance cameras installed merely to annoy a neighbour. But where does legitimate artistic social commentary play into this matter? Consider, for example, the legitimate use of sousveillance as a form of critical inquiry in public, semi-public, and private business establishments [64], [65].

Many issues regarding veillance relate to property, and the defense of property.
VIII. COPYRIGHT, COPYLEFT, AND SUBJECTRIGHT

Surveillance (mounting cameras on property like land and buildings) tends to favour property rights, as opposed to sousveillance (mounting cameras on people) which tends to favour human needs more directly. Another area where this property versus human favoritism is evident is in the domain of intellectual property, trade secrets, national security/secrecy, and copyright.

“The purpose of copyright and related rights is twofold: to encourage a dynamic creative culture, while returning value to creators so that they can lead a dignified economic existence, and to provide widespread, affordable access to content for the public.” – www.wipo.int/copyright/

It has been argued that commercial entities and powerful lobbying groups have subverted the public’s interest through excessive restrictions on fair use [62], as well as through implementations of technologies that restrict fair use. For example, the technologies discussed in Section IV-C have been applied to detect and sabotage cameras in movie theatres, and as discussed, such technologies problematize fair use with regard to computerized vision aids.

To understand copyright, consider a simple example of photographing a person. Consider the three entities:

  1. the subject;
  2. the photographer (“transmitient”); and
  3. a recipient of the image (the person viewing the photograph).

Copyleft [66], if used, protects the recipient to some degree. Copyright laws protect the photographer, but adequate protection of the subject of the photograph is often absent. Some subject protection exists, e.g. in France or Quebec (Canada), “le droit à l’image” (image rights) of the subject, but these rights are stripped away in many cases such as news reportage or surveillance.

Recently the concept of Subjectrights (denoted by a circled “S” in contrast to the circled “C” of copyright) has been proposed for the protection of such “passive contributions”. It is useful to consider Erving Goffman’s distinction between that which we “give off” (passive contributions) and that which we “give” (active contributions). Copyright protects only the latter, and not the former. An example of a signed Subjectright agreement between a subject and a photographer with the Canadian Broadcasting Corporation is shown in Fig. 9.
Thus the veillance between (1) and (2) is asymmetric at best. The veillance between (2) and (3) is also asymmetric: the recipient of the information has far fewer rights than the “transmitient” (sender/creator/author/photographer).
The word “copyright”, if read literally, ought to mean “the right to copy”. Copyright enforcement ought to mean the enforcement of the right to copy (e.g. enforcement of fair use access rights). These “fair use enforcements” ought to include access requirements for persons with special needs. Currently, due to copy protection mechanisms, copyright material is often inaccessible to persons with special needs. As copy protection can exclude such fair use, it is itself immoral. As we age, many of us will replace portions of our mind/brain with computer systems, giving rise to the Silicon Brain / Silicon Mind / Mind Mesh [4]. A person with Alzheimer’s who has a silicon brain/mindmesh cannot be legally, ethically, or morally excluded from viewing copyrighted material (e.g. in a movie theatre). Additionally, more and more people will likely wear lifelong recording devices (Fig. 10).

In this way it will be impossible, or at least morally, ethically, and legally troublesome, for a movie theatre owner or anyone else to prevent a movie from being “recorded” (remembered) for strictly personal usage. Accordingly, copyright restrictions already are (or will have to be) based on preventing dissemination, as mere acquisition for personal use must be considered fair use.

Similarly in matters of national or corporate security, once wearable and implantable computing becomes commonplace [4], we will have to learn to accept the “cyborg” being as a human being. It will all have to come down to mutual trust, and no longer the one-sided trust of the totalitarian or surveillance-only society.

Would it be right to prohibit artist Stephen Wiltshire from seeing a movie, or to deny him employment in a job interview, because he has a photographic memory? Yes, there is a danger he could violate copyright or expose corporate or national secrets. But simply having a good memory should not be grounds for dismissal or rejection. And whereas the courts already have redress for such violations of copyright or trade/national secrets, regardless of whether they were done with natural or computerized memory, assistive technologies and the good and prosperity that wearable computing will bring to society are inevitable. Moreover, perhaps the best way to prevent abuse of sousveillance (e.g. voyeurism, extortion, etc.) is more sousveillance. For example, extortion requires secrecy, such that a person trying to threaten an entity with revealing recorded secrets might actually be caught in the act by way of the very technology used to perpetrate the crime.

IX. THE INEVITABILITY OF SOUSVEILLANCE: UNIVERSAL NEEDS RATHER THAN INDIVIDUAL WANTS

Sousveillance is not merely a self-centered or narcissistic entitlement or human right/freedom. Rather, it meets universal human needs — wayfinding, personal safety, justice, and prosperity — in the service of all of humanity — even when only used by a small percentage of the people in a society.

Consider two parallel societies, a McVeillance/Surveillance Society [68] (where only surveillance is allowed), and a “Veillance Society” (where both veillances are allowed, and participatory veillance is encouraged).

The Veillance Society meets basic needs of human security [69] and personal safety — for everyone — not just the safety and security of property and merchandise, or of persons in high places (“sur”). In environments where surveillance cameras are already being used, i.e. where there is already a reduced expectation of privacy, sousveillance meets the needs of sight, personal safety, human security, and the like, and people enjoy a higher quality of life.

Whereas some individual shopkeepers and some police would be upset with such two-sided Veillance, the society as a whole will tend to be more balanced, just, prosperous, and “livable”. Corrupt police, department stores with their fire exits illegally chained shut, and the like, will likely be revealed. And the society as a whole will enjoy greater information and knowledge about how the society works, and what is happening — from things as simple as “How do I find my way back to my car?” to more complex things like “Is that politician accepting a bribe from the Chief of Police?”.

A new market economy in AR products and services will flourish. The Veillance Society will tend to enjoy greater prosperity, and people will want to migrate from the McVeillance Society to the Veillance Society, assuming they are free to migrate. If they are not free to do so (i.e. if they are held prisoner in the McVeillance Society), then they will likely be less happy, less productive, and the McVeillance Society will not be able to escape the resulting decrease in prosperity.
X. CONCLUSION AND DECONCLUSION
Sousveillance (e.g. wearable cameras and Digital Eye Glass) and surveillance must co-exist, giving rise to a “Veillance Society”. This will bring an end to the Surveillance Society that began to emerge in recent history. But will sousveillance be co-opted by centralized “cloud control”? Will surveillance be rev-opted as “unterveillance”? It is still too early to know: as an emerging field, much work remains to be done! That work needs to be in the field of “Veillance Studies” and praxis, and needs to encompass sur/sousveillance, Clarke’s dataveillance, Michael’s Uberveillance [70], [71], and all other veillances — hence the formation of such a field of study.


The Future of Transportation
Self-driving vehicles. Drivers on demand. Data-driven infrastructure. Vehicles that respond to passengers and to the environment… A sea change is happening in transportation, and mobility of the (near) future will be radically different than today — greener, more comfortable and more efficient. Innovations are rolling out of laboratories, businesses and city halls on four, two, (or zero) wheels at an accelerating pace, exploring the future of urban mobility.
The global spotlight is focused on transportation technology and design — the machines that move people — yet there are a host of unanswered questions as transitions toward future technologies are made. This year, California began issuing licenses to “drivers” of self-driving cars, but insurance companies still can't determine who is at fault when something goes wrong. Cities are debating whether ride-sharing systems should be banned from their streets, while taxi companies organize strikes around the world to protest citizen-driver services like Lyft and Uber. Policy and innovation must go hand in hand for innovations to take hold.
The Road Ahead will be a forum on all dimensions of future urban mobility, bringing leading theorists, dreamers, and practitioners into conversation and debate — from designers to financiers, from policy makers to provocateurs. One full day of conversation and presentation will seek to showcase innovations, address challenges, and holistically explore the future of moving from A to B — from self-driving and sharing to policy, legality, risk, and society at large.
Speakers include cutting-edge researchers from MIT and Harvard, leaders in the transportation industry, protagonists of start-ups in mobility, and public officials.
Alternate models of sharing and reconfigured access to mobility are fundamentally disrupting the transportation paradigm in cities. With the emergence of companies like Uber and ZipCar, citizens are rethinking private car ownership, while bike share systems are competing with traditional public and private transportation options, and start-up companies are creating innovative new service models. Yet these citizen-centric developments might come into conflict with traditional incumbents such as taxis, car companies and regulators. In this session, the mobility game changers themselves will add their voices to the debate.
Questions: How are ownership models evolving? How are incumbent modes at risk/enhanced? How do people access mobility options (and how do we make it equitable)? What are the financial aspects? How are public and private interests in conflict, or complementary?
Session 2. Driverless Cities
While visions of self-driving vehicles have long been the province of science fiction, the attention of many has been captured by recent examples like the DARPA Challenge and the Google Self-Driving Car. Tesla’s Autopilot features will bring that company’s offerings near self-driving for consumers in the coming months, and the State of California has begun offering licenses to “drivers” of autonomous vehicles. Unanswered, however, are the questions of regulation and safety. This session seeks to tease out the larger, societal implications of the technology to understand the truer nature of the “self-driving future”.
Questions: Who’s responsible in the case of an accident, and can you sue code? Beyond advances in technology and cars, what will be the urban impact of autonomous vehicles? What impacts on policy, regulation and finance come with this new technology? How do we deal with risk, and who’s at fault? How does the city change in its form? Do we need as many roads? What are the projected effects? How is this in conflict with, or enhanced by, the trends we see today with new drivers, including new ownership trends?
Session 3. Data Driven Mobility
Urban spaces are creating an unprecedented amount of data, from mobile phone data about individuals to the traces created automatically from machine-to-machine interactions. This session seeks to explore how the ubiquity of “big data” and the increasing prevalence of situated technology are changing how decisions are made about policy, urban planning and citizen behavior with regard to mobility.
Questions: How does that data show the mobility patterns of our city? How does that enable us to design/create better transit
systems? What new dynamics and options in mobility are being created? How is data being collected, and what are the privacy
considerations? What is the agency of the individual? Are individuals simply consumers (of their own data)?
The greatest innovation might lie not in the systems themselves but in how they interface with each other. Ubiquitous computing and new technologies are enabling the creativity of pioneers and inventors in creating new options and experiences that enhance the process of getting from point A to B. We explore the new mobility portfolio and intermodal innovations, as well as the visions driving a new future not yet written.
Questions: What are the opportunities gained from these new modes/inventions? What are the challenges to innovation/disruption? What are the friction-points in mobility today? How do policymakers nurture an innovative ecosystem, and how can cities be living laboratories for these innovations?
Important Points
The Lazarus Effect: Pushing the Frontiers of Resuscitation – This refers to the 1997 experiment where conductors were brought back to life at temperatures below −143 degrees Celsius. It is also the name of a 2015 horror movie in which “Medical researcher Frank (Mark Duplass), his fiancee Zoe (Olivia Wilde) and their team have achieved the impossible: they have found a way to revive the dead. After a successful, but unsanctioned, experiment on a lifeless animal, they are ready to make their work public. However, when their dean learns what they’ve done, he shuts them down. Zoe is killed during an attempt to recreate the experiment, leading Frank to test the process on her. Zoe is revived — but something evil is within her.” There is also a book of the same name from 1983.
NB: Lazarus of Bethany, a figure in the Gospel of John, which describes him being raised from the dead by Jesus.
To Baby or Not to Baby: The Impact of Reproductive Technology – A great introduction from child-encyclopedia.com:
Research on the psychological development of children in assisted reproduction families has focused on two major types of assisted reproduction:
  1. “High-tech” procedures include in vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI). IVF involves the fertilization of an egg with sperm in the laboratory and the transfer of the resulting embryo to the mother’s womb. With ICSI, a single sperm is injected directly into the egg to create an embryo.
  2. Gamete donation includes donor insemination and egg donation. Donor insemination involves the insemination of a woman with the sperm of a man who is not her husband or partner. The child produced is genetically related to the mother but not the father. Egg donation is like donor insemination in that the child is genetically related to only one parent, but in this case the mother is the parent with whom the child shares no genetic link. Egg donation is a much more complex and intrusive procedure than donor insemination and involves IVF techniques.
Problems
The key problems in this area of investigation are as follows:
  • The higher incidence of multiple births, preterm births, and low-birthweight infants following IVF and ICSI. The impact of these factors on child development must be considered separately from the impact of IVF and ICSI per se. Many of the empirical investigations have focused on families with a singleton (only) child to avoid the confounding effect of a multiple birth.
  • Mothers of IVF children are generally older than mothers who give birth without medical intervention. Attempts to match natural-conception mothers for maternal age have presented difficulties, as has matching for the birth order of the target child and the number of children in the family, although some researchers have attempted to control statistically for these variables.




