Monday 31 December 2012

FACE RECOGNITION ---- ABSTRACT AND SEMINARS



                                      Abstract of Face Recognition Using Neural Network

A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. In the broader sense, a neural network is a collection of mathematical models that emulate some of the observed properties of biological nervous systems and draw on the analogies of adaptive biological learning. It is composed of a large number of highly interconnected processing elements that are analogous to neurons and are tied together with weighted connections that are analogous to synapses.

To make this clearer, let us study the model of a neural network with the help of Figure 1. The most common neural network model is the multilayer perceptron (MLP). It is composed of hierarchical layers of neurons arranged so that information flows from the input layer to the output layer of the network. The goal of this type of network is to create a model that correctly maps the input to the output using historical data, so that the model can then be used to produce the output when the desired output is unknown.
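To make the idea concrete, the sketch below shows a tiny MLP forward pass in Python. The layer sizes, the sigmoid activation and the random weights are illustrative assumptions for this post, not details of any particular system.

```python
# Minimal sketch of a multilayer perceptron (MLP) forward pass.
# Layer sizes, the sigmoid activation and the random weights are
# illustrative assumptions, not values taken from the abstract.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

n_inputs, n_hidden, n_outputs = 4, 8, 2          # hypothetical layer sizes
W1 = rng.normal(size=(n_inputs, n_hidden))       # input -> hidden weights ("synapses")
W2 = rng.normal(size=(n_hidden, n_outputs))      # hidden -> output weights

def forward(x):
    """Information flows from the input layer to the output layer."""
    hidden = sigmoid(x @ W1)                     # each hidden neuron applies its activation
    return sigmoid(hidden @ W2)                  # output layer produces the prediction

print(forward(np.array([0.2, 0.5, 0.1, 0.9])))
```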

A neural network is a sequence of neuron layers. A neuron is the building block of a neural net and is very loosely based on the brain's nerve cell. Neurons receive inputs via weighted links from other neurons. These inputs are processed according to the neuron's activation function, and signals are then passed on to other neurons.
In more practical terms, neural networks are made up of interconnected processing elements called units, which are the equivalent of the brain's counterpart, the neurons.

A neural network can be considered as an artificial system that can perform "intelligent" tasks similar to those performed by the human brain.
Neural networks resemble the human brain in the following ways:

1. A neural network acquires knowledge through learning.

2. A neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights.
3. Neural networks can modify their own topology, just as neurons in the brain can die and new synaptic connections can grow.

Why we choose face recognition over other biometric?

There are a number of reasons to choose face recognition over other biometrics, including the following:

1. It requires no physical interaction on behalf of the user.
2. It is accurate and allows for high enrolment and verification rates.
3. It does not require an expert to interpret the comparison result.
4. It can use your existing hardware infrastructure; existing cameras and image capture devices will work with no problems.
5. It is the only biometric that allows you to perform passive identification in a one-to-many environment (e.g. identifying a terrorist in a busy airport terminal).

The face is an important part of who you are and how people identify you. Except in the case of identical twins, the face is arguably a person's most distinctive physical characteristic. While humans have had the innate ability to recognize and distinguish faces for millions of years, computers are only now catching up. Face recognition involves two types of comparison. The first is verification, where the system compares the given individual with who that individual claims to be and gives a yes or no decision. The second is identification, where the system compares the given individual to all the other individuals in the database and gives a ranked list of matches. All identification or authentication technologies operate using the following four stages:

1. Capture: a physical or behavioural sample is captured by the system during enrollment and also during the identification or verification process.
2. Extraction: unique data is extracted from the sample and a template is created.
3. Comparison: the template is then compared with a new sample.
4. Match/non-match: the system decides whether the features extracted from the new sample are a match or a non-match.

Face recognition starts with a picture, attempting to find a person in the image. This can be accomplished using several methods including movement, skin tones, or blurred human shapes. The face recognition system locates the head and finally the eyes of the individual. A matrix is then developed based on the characteristics of the individual’s face. The method of defining the matrix varies according to the algorithm (the mathematical process used by the computer to perform the comparison). This matrix is then compared to matrices that are in a database and a similarity score is generated for each comparison.
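As a rough illustration of this comparison step, the Python sketch below scores a probe descriptor against a small database of templates and ranks the results. The cosine similarity measure, the toy vectors and the 0.95 threshold are assumptions chosen for the example; a real system uses whatever metric its particular algorithm defines.

```python
# Sketch of the compare-and-rank step: score a probe face descriptor against
# every enrolled template.  Cosine similarity and the toy data are assumptions;
# real systems use the metric defined by their own algorithm.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

database = {                              # hypothetical enrolled templates
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}
probe = np.array([0.85, 0.15, 0.35])      # descriptor extracted from the new image

scores = {name: cosine_similarity(probe, tmpl) for name, tmpl in database.items()}
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)                             # ranked list of matches (identification)
print(ranked[0][1] > 0.95)                # threshold decision (verification: match / non-match)
```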

Artificial intelligence is used to simulate human interpretation of faces. In order to increase the accuracy and adaptability, some kind of machine learning has to be implemented.

There are essentially two methods of capture. One is video imaging and the other is thermal imaging. Video imaging is more common as standard video cameras can be used. The precise position and angle of the head and the surrounding lighting conditions may affect system performance. The complete facial image is usually captured and a number of points on the face can then be mapped: the positions of the eyes, the mouth and the nostrils, for example. More advanced technologies build a 3-D map of the face, which multiplies the possible measurements that can be made.

Thermal imaging has better accuracy as it uses facial temperature variations caused by vein structure as the distinguishing traits. As the heat pattern is emitted from the face itself, without a source of external radiation, these systems can capture images regardless of lighting conditions, even in the dark. The drawback is high cost: thermal cameras are considerably more expensive than standard video cameras.

Face recognition technologies have generally been associated with very costly, high-security applications. Today the core technologies have evolved and the cost of equipment is going down dramatically due to integration and increasing processing power. Certain applications of face recognition technology are now cost effective, reliable and highly accurate. As a result there are no technological or financial barriers to stepping from a pilot project to widespread deployment.

   TO DOWNLOAD THIS ABSTRACT CLICK ON THE LINK BELOW

                                    http://www.ziddu.com/download/21240160/facerecog.docx.html


Tuesday 18 December 2012

CELLULAR POSITIONING --- ABSTRACT & SEMINARS

                                                   CELLULAR POSITIONING

Introduction:

          Location related products are the next major class of value added services that mobile network operators can offer their customers. Not only will operators be able to offer entirely new services to customers, but they will also be able to offer improvements on current services such as location-based prepaid or information services. The deployment of location based services is being spurred by several factors:

Competition :
 
The need to find new revenue-enhancing and differentiating value added services has been increasing and will continue to increase over time.

Regulation:

The Federal Communications Commission (FCC) of the USA adopted a ruling in June 1996 (Docket no. 94-102) that requires all mobile network operators to provide location information on all calls to "911", the emergency services. The FCC mandated that by 1st October 2001, all wireless 911 calls must be pinpointed within 125 meters, 67% of the time. On 24 December 1998, the FCC amended its ruling to allow terminal-based solutions as well as network-based ones (CC Docket No. 94-102, Waivers for Handset-Based Approaches). There are a number of regulations that location based services must comply with, not least of all to protect the privacy of the user. Mobile Streams believes that it is essential to comply with all such regulations fully. However, such regulations are only the starting point for such services; there are possibilities for a high degree of innovation in this new market that should not be overlooked.

 Technology
There have been continuous improvements in handset, network and positioning technologies. For example, in 1999, Benefon, a Finnish GSM and NMT terminal vendor, launched the ESC! GSM/GPS mapping phone.

Needs Of Cellular Positioning:
 
There are a number of reasons why it is useful to be able to pinpoint the position of a mobile telephone, some of which are described below.

Location-sensitive billing: Different tariffs can be provided depending upon the position of the cell phone. This allows an operator without a copper cable based PSTN to offer competitive rates for calls from home or the office.

Increased subscriber safety: A significant number of emergency calls, like US 911, come from cell phones, and in most cases the caller cannot provide accurate information about their position. As a real-life example, consider the following incident. In February 1997 a person became stranded along a highway during a winter blizzard (Associated Press, 1997). She used her cellular phone to call for help but could not provide her location due to white-out conditions. To identify the caller's approximate position, the authorities asked her to tell them when she could hear the search plane flying above. Forty hours elapsed from the time of her first call before a ground rescue team reached her. An automatic positioning system would have allowed rescuers to reach her far sooner.

Positioning Techniques :
 
There are a variety of ways in which position can be derived from the measurement of signals, and these can be applied to any cellular system, including GSM. The important measurements are the Time of Arrival (TOA), the Time Difference of Arrival (TDOA), the Angle of Arrival (AOA) and the carrier phase. Each of these measurements places the object to be positioned on a particular locus. Multiple measurements give multiple loci, and the point of their intersection gives the position. If the density of base stations is such that more measurements can be made than are strictly required, a least squares approach can be used. If the measurements are too few in number, the loci will intersect at more than one point, resulting in an ambiguous position estimate. In the following discussion we assume that the mobile station and the base stations lie in the same plane. This is approximately true for most networks unless the geography includes hilly terrain or high-rise buildings.

Time of Arrival (TOA):
 
In a remote positioning system this involves the measurement of the propagation time of a signal from the mobile phone to a base station. Each measurement fixes the position of the mobile on a circle. With two stations there will be two circles, and they can intersect in a maximum of two points. This gives rise to an ambiguity, which is resolved either by including a priori information about the trajectory of the mobile phone or by making a propagation time measurement to a third base station.
The TOA measurement requires exact time synchronization between the base stations, and the receiver must have an accurate clock, so that the receiver knows the exact time of transmission and an exact TOA measurement can be made.
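The sketch below illustrates the idea under simple assumptions: three base stations with known coordinates, ideal TOA measurements, and a least squares fit (via SciPy) to find the point where the range circles intersect. The station positions and timing values are made up for the example.

```python
# Sketch of turning TOA measurements from three base stations into a 2-D
# position estimate by least squares.  Station coordinates, the measured
# times and the use of SciPy are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8                                        # propagation speed (m/s)
stations = np.array([[0.0, 0.0],                 # hypothetical base station positions (m)
                     [5000.0, 0.0],
                     [0.0, 5000.0]])
toa = np.array([8.333e-6, 1.118e-5, 1.344e-5])   # measured propagation times (s)
ranges = C * toa                                 # each TOA fixes the mobile on a circle

def residuals(p):
    """Difference between measured ranges and distances from a trial point p."""
    return np.linalg.norm(stations - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([2000.0, 2000.0]))
print(estimate.x)                                # intersection of the circles, in metres
```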

Monday 17 December 2012

NON VISIBLE IMAGING ---- ABSTRACT & SEMINARS

                                                              NON VISIBLE IMAGING


Near infrared light consists of light just beyond visible red light (wavelengths greater than 780 nm). Contrary to popular thought, near infrared photography does not allow the recording of thermal radiation (heat); far-infrared thermal imaging requires more specialized equipment. Infrared images exhibit a few distinct effects that give them an exotic, antique look. Plant life looks completely white because it reflects almost all infrared light (because of this effect, infrared photography is commonly used in aerial photography to analyze crop yields, pest control, etc.). The sky is a stark black because no infrared light is scattered. Human skin looks pale and ghostly. Dark sunglasses all but disappear in infrared because they don't block any infrared light, and it's said that you can capture the near infrared emissions of a common iron.
            
             Infrared photography has been around for at least 70 years, but until recently has not been easily accessible to those not versed in traditional photographic processes. Since the charge-coupled devices (CCDs) used in digital cameras and camcorders are sensitive to near-infrared light, they can be used to capture infrared photos. With a filter that blocks out all visible light (also frequently called a "cold mirror" filter), most modern digital cameras and camcorders can capture photographs in infrared. In addition, they have LCD screens, which can be used to preview the resulting image in real-time, a tool unavailable in traditional photography without using filters that allow some visible (red) light through.

INTRODUCTION:


Near-infrared (1000 - 3000 nm) spectrometry, which employs an external light source for determination of chemical composition, has previously been utilized for industrial determination of the fat content of commercial meat products, for in vivo determination of body fat, and in our laboratories for determination of lipoprotein composition in carotid artery atherosclerotic plaques. Near-infrared (IR) spectrometry has been used industrially for several years to determine the saturation of unsaturated fatty acid esters (1). Near-IR spectrometry uses a tunable light source external to the experimental subject to determine its chemical composition.

           Industrial utilization of near-IR will allow for the in vivo measurement of the tissue-specific rate of oxygen utilization as an indirect estimate of energy expenditure. However, assessment of regional oxygen consumption by these methods is complex, requiring a high level of surgical skill for implantation of indwelling catheters to isolate the organ under study.

NUCLEAR BATTERIES -DAINTIEST DYNAMICS

                                      NUCLEAR BATTERIES -DAINTIEST DYNAMICS 


Micro electro mechanical systems (MEMS) comprise a rapidly expanding research field with potential applications varying from sensors in air bags, wrist-worn GPS receivers and matchbox-size digital cameras to more recent optical applications. Depending on the application, these devices often require an on-board power source for remote operation, especially in cases requiring operation for an extended period of time. In the quest to boost micro scale power generation, several groups have turned their efforts to well-known energy sources, namely hydrogen and hydrocarbon fuels such as propane, methane, gasoline and diesel. Some groups are developing micro fuel cells that, like their macro scale counterparts, consume hydrogen to produce electricity. Others are developing on-chip combustion engines, which actually burn a fuel like gasoline to drive a minuscule electric generator. But all these approaches have difficulties regarding low energy densities, elimination of by-products, down-scaling and recharging. These difficulties can be overcome to a large extent by the use of nuclear micro batteries.
           
Radioisotope thermoelectric generators (RTGs) exploit the extraordinary potential of radioactive materials for generating electricity and are particularly used for generating electricity in space missions. They use a process known as the Seebeck effect. The problem with RTGs is that they don't scale down well, so scientists had to find other ways of converting nuclear energy into electric energy. They have succeeded by developing nuclear batteries.

NUCLEAR BATTERIES

Nuclear batteries use the incredible amount of energy released naturally by tiny bits of radioactive material, without any fission or fusion taking place inside the battery. These devices use thin radioactive films that pack in energy at densities thousands of times greater than those of lithium-ion batteries. Because of this high energy density, nuclear batteries are extremely small. Considering the small size and shape of the battery, the scientists who developed it fancifully call it the "DAINTIEST DYNAMO". The word 'dainty' means pretty.

Scientists have developed two types of micro nuclear batteries. One is the junction type battery and the other is the self-reciprocating cantilever. The operation of both is explained below, one by one.

 JUNCTION TYPE BATTERY

This kind of nuclear battery directly converts the high-energy particles emitted by a radioactive source into an electric current. The device consists of a small quantity of Ni-63 placed near an ordinary silicon p-n junction - a diode, basically.

WORKING:

As the Ni-63 decays it emits beta particles, which are high-energy electrons that spontaneously fly out of the radioisotope's unstable nuclei. The emitted beta particles ionize the diode's atoms, creating excess electrons and holes that are separated in the vicinity of the p-n interface. These separated electrons and holes stream away from the junction, producing a current.
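A rough back-of-envelope sketch of the resulting current is shown below. The source activity, collection efficiency and other figures are assumed round numbers used only to show the arithmetic, not measured values from the actual devices.

```python
# Back-of-envelope sketch of the current a Ni-63 betavoltaic junction might
# deliver.  The activity, pair-creation energy and collection efficiency below
# are assumed round numbers, not measured data from the real devices.
ELECTRON_CHARGE = 1.602e-19      # coulombs

activity_bq = 1.0e9              # assumed source activity: 1 GBq (decays per second)
avg_beta_energy_ev = 17_400      # average Ni-63 beta energy, about 17.4 keV
pair_energy_ev = 3.6             # roughly 3.6 eV to create one electron-hole pair in silicon
collection_efficiency = 0.05     # assumed fraction of pairs actually collected

pairs_per_decay = avg_beta_energy_ev / pair_energy_ev
current_amps = activity_bq * pairs_per_decay * collection_efficiency * ELECTRON_CHARGE
print(f"estimated current ~ {current_amps:.2e} A")   # of the order of tens of nanoamps
```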

PUSH TECHNOLOGY

                                                              PUSH TECHNOLOGY

        
Push technology reverses the Internet's content delivery model. Before push, content publishers had to rely upon the end user's own initiative to bring them to a web site or download content. With push technology the publisher can deliver content directly to the user's PC, thus substantially improving the likelihood that the user will view it. Push content can be extremely timely, and delivered fresh several times a day. Information keeps coming to the user whether he asked for it or not. The most common analogy for push technology is a TV channel: it keeps sending us stuff whether we care about it or not.
            
Push was created to alleviate two problems facing users of the net. The first problem is information overload. The volume and dynamic nature of content on the internet is an impediment to users and has become an ease-of-use issue. Without push, applications can be tedious, time consuming and less than dependable: users have to manually hunt down information, search out links, and monitor sites and information sources. Push applications and technology building blocks narrow that focus and add considerable ease of use. The second problem is that most end users are restricted to low bandwidth internet connections, such as 33.3 kbps modems, making it difficult to receive multimedia content. Push technology provides a means to pre-deliver much larger packages of content.
             
Push technology enables the delivery of multimedia content on the internet through the use of local storage and transparent content downloads. Like a faithful delivery agent, push, often referred to as broadcasting, delivers content directly to the user transparently and automatically. It is one of the internet's most promising technologies.

Already a success, push is being used to pump data in the form of news, current affairs, sports and so on to many computers connected to the internet. Updating software is one of the fastest growing uses of push; it is a new and exciting way to manage software update and upgrade hassles. Using the internet today without the aid of a push application can be tedious, time consuming and less than dependable. Computer programming is an inexact art, and there is a huge need to quickly and easily get bug fixes, software updates, and even whole new programs out to people. Users have to manually hunt down information, search out links, and monitor sites and information sources.

2. THE PUSH PROCESS

For the end user, the process of receiving push content is quite simple. First, an individual subscribes to a publisher's site or channel by providing content preferences. The subscriber also sets up a schedule specifying when information should be delivered. Based on the subscriber's schedule, the PC connects to the internet, and the client software notifies the publisher's server that the download can occur. The server collates the content pertaining to the subscriber's profile and downloads it to the subscriber's machine, after which the content is available for the subscriber's viewing.
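The sketch below captures this cycle as a minimal "automated pull" client in Python. The channel URL, the profile fields and the six-hour schedule are placeholders invented for the example; they do not describe any real push product.

```python
# Sketch of the "automated pull" behind push: the client holds a subscription
# profile and a schedule, and polls the publisher on the subscriber's behalf.
# The channel URL, profile fields and 6-hour schedule are illustrative assumptions.
import time
import urllib.parse
import urllib.request

subscription = {
    "channel": "https://publisher.example.com/channel",   # hypothetical publisher endpoint
    "preferences": ["news", "sports"],                     # content preferences set at subscribe time
    "interval_seconds": 6 * 3600,                          # delivery schedule chosen by the subscriber
}

def fetch_once(sub):
    """One scheduled delivery: send our profile, download the collated content."""
    query = urllib.parse.urlencode({"topics": ",".join(sub["preferences"])})
    with urllib.request.urlopen(f"{sub['channel']}?{query}") as response:
        content = response.read()
    with open("delivered_content.bin", "wb") as f:          # store locally for offline viewing
        f.write(content)

def run(sub):
    """The client wakes up on the subscriber's schedule and pulls automatically."""
    while True:
        fetch_once(sub)
        time.sleep(sub["interval_seconds"])

# run(subscription)   # left commented out; the endpoint above is fictitious
```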

WORKING

Interestingly enough, from a technical point of view, most push applications are pull and only appear to be 'push' to the user. In fact, a more accurate description of this process would be 'automated pull'.
The web currently requires the user to poll sites for new or updated information. This manual polling and downloading process is referred to as 'pull' technology. From a business point of view, this process provides little information about the user, and even less control over what information is acquired. It is the user who has to keep track of the location of information sites, and the user who has to continuously search for informational changes, a very time consuming process. The 'push' model alleviates much of this tedium.

CARBON NANOTUBE FLOW SENSORS --- ABSTRACT & SEMINARS

                                                CARBON NANOTUBE FLOW SENSORS

Introduction:

Direct generation of measurable voltages and currents is possible when a fluid flows over a variety of solids, even at the modest speed of a few meters per second. In the case of gases, the underlying mechanism is an interesting interplay of Bernoulli's principle and the Seebeck effect: pressure differences along streamlines give rise to temperature differences across the sample; these in turn produce the measured voltage. The electrical signal is quadratically dependent on the Mach number M and proportional to the Seebeck coefficient of the solid.

This discovery was made by Professor Ajay Sood and his student Shankar Ghosh of IISc Bangalore. They had previously discovered that the flow of liquids, even at low speeds ranging from 10^-1 m/s to 10^-7 m/s (that is, over six orders of magnitude), through bundles of atomic-scale, straw-like tubes of carbon known as nanotubes generated tens of microvolts across the tubes in the direction of the flow of the liquid. The results of the experiments done by Professor Sood and Ghosh show that gas flow sensors and energy conversion devices can be constructed based on the direct generation of electrical signals. The experiment was done on single wall carbon nanotubes (SWNTs). The effect is not confined to nanotubes alone; it is also observed in doped semiconductors and metals.

The observed effect immediately suggests the following technology application, namely gas flow sensors that measure gas velocities from the electrical signal generated. Unlike existing gas flow sensors, which are based on heat transfer from an electrically heated sensor to the fluid, a device based on this newly discovered effect would be an active gas flow sensor that gives a direct electrical response to the gas flow. One possible application is in the field of aerodynamics: several local sensors could be mounted on an aircraft body or aerofoil to measure streamline velocities and the effect of drag forces. Energy conversion devices can be constructed based on the direct generation of electrical signals, i.e. if one is able to cascade millions of these tubes, electric energy can be produced.

As the state of the art moves towards atomic scales, sensing presents a major hurdle. The discovery of carbon nanotubes by Sumio Iijima at NEC, Japan in 1991 has provided new channels towards this end. A carbon nanotube (CNT) is a sheet of graphene which has been rolled up and capped with fullerenes at the ends. Nanotubes are exceptionally strong, have excellent thermal conductivity, are chemically inert and have interesting electronic properties which depend on their chirality. The main reason for the popularity of CNTs is their unique properties: they are very strong, mechanically robust, and have a high Young's modulus and aspect ratio. These properties have been studied experimentally as well as with numerical tools. The band gap of CNTs is in the range of 0~100 meV, and hence they can behave as both metals and semiconductors.
          
A lot of factors, such as the presence of a chemical species, mechanical deformation and magnetic fields, can cause significant changes in the band gap, which consequently affect the conductance of the CNTs. These unique electronic properties, coupled with their mechanical strength, are exploited in various sensors. The recent discovery by two Indian scientists of a new property, flow-induced voltage in nanotubes, has added another dimension to micro sensing devices.

CNT Electronic Properties
 
Electrically, CNTs are either semiconducting or metallic in nature, which is determined by the type of nanotube: its chiral angle, diameter, the relation between the tube indices, and so on. The electronic structure and properties are based on the two dimensional structure of graphene. For instance, if the tube indices n and m satisfy the condition n - m = 3q, where q is an integer, the tube behaves as a metal, in the sense that it has zero band gap energy. In the armchair case (where n = m) the bands cross at the Fermi level, i.e., the band gap vanishes. Otherwise the tube is expected to have the properties of a semiconductor.
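The small sketch below applies this index rule directly. The example (n, m) pairs are arbitrary, and the three-way labelling is a simplification of the full band-structure picture.

```python
# Sketch of the (n, m) chirality rule described above: a nanotube behaves as a
# metal (zero band gap) when n - m is a multiple of 3, and as a semiconductor
# otherwise; armchair tubes have n = m.  The example indices are arbitrary.
def classify_cnt(n: int, m: int) -> str:
    if n == m:
        return "armchair (metallic, bands cross at the Fermi level)"
    if (n - m) % 3 == 0:
        return "metallic (zero band gap)"
    return "semiconducting"

for indices in [(10, 10), (9, 0), (10, 0), (13, 6)]:
    print(indices, "->", classify_cnt(*indices))
```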

Fluid Flow Through Carbon Nanotube
 
Recently there has been extensive study of the effect of fluid flow through nanotubes, which is part of an ongoing effort worldwide to have a representative in the microscopic nano-world of all the sensing elements in our present macroscopic world. The Indian Institute of Science has made a major contribution in this regard. It was theoretically predicted that the flow of a liquid medium would lead to the generation of a flow-induced voltage, and this was experimentally established by two Indian scientists at IISc. Only the effect of liquids had been theoretically investigated and established experimentally; the effect of gas flow over nanotubes was not investigated until A. K. Sood and Shankar Ghosh of IISc investigated it experimentally and provided a theoretical explanation for it.

The same effect as in the case of liquids was observed, but for an entirely different reason. These results have interesting applications in biotechnology and can be used in sensing applications. Micro devices can be powered by exploiting these properties.

MESH RADIO

                                                               MESH RADIO:

Introduction:
 
         Governments are keen to encourage the roll-out of broadband interactive multimedia services to business and residential customers because they recognise the economic benefits of e-commerce, information and entertainment. Digital cable networks can provide a compelling combination of simultaneous services including broadcast TV, VOD, fast Internet and telephony. Residential customers are likely to be increasingly attracted to these bundles as the cost can be lower than for separate provision. Cable networks have therefore been implemented or upgraded to digital in many urban areas in the developed countries.

ADSL has been developed by telcos to allow on-demand delivery via copper pairs. A bundle comparable to cable can be provided if ADSL is combined with PSTN telephony and satellite or terrestrial broadcast TV services, but incumbent telcos have been slow to roll it out and 'unbundling' has not proved successful so far. Some telcos have been accused of restricting ADSL performance and keeping prices high to protect their existing business revenues. Prices have recently fallen, but even now the ADSL (and SDSL) offerings are primarily targeted at providing fast (but contended) Internet services for SME and SOHO customers. This slow progress (which is partly due to the unfavourable economic climate) has also allowed cable companies to move slowly.

          A significant proportion of customers in suburban and semi-rural areas will only be able to have ADSL at lower rates because of the attenuation caused by the longer copper drops. One solution is to take fibre out to street cabinets equipped for VDSL but this is expensive, even where ducts are already available.
Network operators and service providers are increasingly beset by a wave of technologies that could potentially close the gap between their fibre trunk networks and a client base that is all too anxious for the industry to accelerate the rollout of broadband. While the established vendors of copper-based DSL and fibre-based cable are finding new business, many start-up operators, discouraged by the high cost of entry into wired markets, have been looking to evolving wireless radio and laser options.
          
          One relatively late entrant into this competitive mire is mesh radio, a technology that has quietly emerged to become a potential holder of the title 'next big thing'. Mesh Radio is a new approach to Broadband Fixed Wireless Access (BFWA) that avoids the limitations of point to multi-point delivery. It could provide a cheaper '3rd Way' to implement residential broadband that is also independent of any existing network operator or service provider. 

         Instead of connecting each subscriber individually to a central provider, each is linked to several other subscribers nearby by low-power radio transmitters; these in turn are connected to others, forming a network, or mesh, of radio interconnections that at some point links back to the central transmitter.

ANTHROPOMORPHIC ROBOT HAND -- ABSTRACT & SEMINARS

                                       ANTHROPOMORPHIC ROBOT HAND   
                 

This paper presents an anthropomorphic robot hand called the Gifu hand II, which has a thumb and four fingers, all the joints of which are driven by servomotors built into the fingers and the palm. The thumb has four joints with four degrees of freedom (DOF); the other fingers have four joints with 3 DOF; and two axes of the joints near the palm cross orthogonally at one point, as is the case in the human hand. The Gifu hand II can be equipped with a six-axis force sensor at each fingertip and a newly developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at the time of object grasping are described and discussed herein. Our results demonstrate that the Gifu hand II has a high potential to perform dexterous object manipulations like the human hand.

INTRODUCTION

            IT IS HIGHLY expected that forthcoming humanoid robots will execute various complicated tasks via communication with a human user. The humanoid robots will be equipped with anthropomorphic multifingered hands very much like the human hand. We call this a humanoid hand robot. Humanoid hand robots will eventually supplant human labor in the execution of intricate and dangerous tasks in areas such as manufacturing, space, the seabed, and so on. Further, the anthropomorphic hand will be provided as a prosthetic application for handicapped individuals.

Many multifingered robot hands (e.g., the Stanford-JPL hand by Salisbury et al. [1], the Utah/MIT hand by Jacobsen et al. [2], the JPL four-fingered hand by Jau [3], and the Anthrobot hand by Kyriakopoulos et al. [4]) have been developed. These robot hands are driven by actuators that are located in a place remote from the robot hand frame and connected by tendon cables. The elasticity of the tendon cable causes inaccurate joint angle control, and the long wiring of tendon cables may obstruct the robot motion when the hand is attached to the tip of a robot arm. Moreover, these hands have been problematic as commercial products, particularly in terms of maintenance, due to their mechanical complexity.

          To solve these problems, robot hands in which the actuators are built into the hand (e.g., the Belgrade/USC hand by Venkataraman et al. [5], the Omni hand by Rosheim [6], the NTU hand by Lin et al. [7], and the DLR's hand by Liu et al. [8]) have been developed. However, these hands present a problem in that their movement is unlike that of the human hand because the number of fingers and the number of joints in the fingers are insufficient. Recently, many reports on the use of the tactile sensor [9]-[13] have been presented, all of which attempted to realize adequate object manipulation involving contact with the finger and palm. The development of the hand, which combines a 6-axial force sensor attached at the fingertip and a distributed tactile sensor mounted on the hand surface, has been slight.

          Our group developed the Gifu hand I [14], [15], a five-fingered hand driven by built-in servomotors. We investigated the hand's potential, basing the platform of the study on dexterous grasping and manipulation of objects. Because it had a nonnegligible backlash in the gear transmission, we redesigned the anthropomorphic robot hand based on the finite element analysis to reduce the backlash and enhance the output torque. We call this version the Gifu hand II.

BLUE BRAIN ----- ABSTRACT & SEMINARS

                 
                                                              BLUE BRAIN

DEFINITION :

"Blue Brain" is the name of the world's first virtual brain; that is, a machine that can function as a human brain. Today scientists are researching how to create an artificial brain that can think, respond, take decisions, and keep anything in memory. The main aim is to upload the human brain into a machine, so that man can think and take decisions without any effort. After the death of the body, the virtual brain will act as the man. So, even after the death of a person, we will not lose the knowledge, intelligence, personality, feelings and memories of that man, and they can be used for the development of human society.

No one has ever fully understood the complexity of the human brain; it is more complex than any circuitry in the world. So the question may arise: "Is it really possible to create a human brain?" The answer is "Yes", because whatever man has created, he has always followed nature. When man did not have a device called the computer, it was a big question for all. But today it is possible thanks to technology, which is growing faster than everything else. IBM is now researching how to create a virtual brain, called "Blue Brain". If possible, this would be the first virtual brain in the world.

NEWS: The EPFL Blue Gene was the 8th fastest supercomputer in the world 



How it is possible?
 
              First, it is helpful to describe the basic manners in which a person may be uploaded into a computer. Raymond Kurzweil recently provided an interesting paper on this topic. In it, he describes both invasive and noninvasive techniques. The most promising is the use of very small robots, or nanobots. These robots will be small enough to travel throughout our circulatory systems. Traveling into the spine and brain, they will be able to monitor the activity and structure of our central nervous system. They will be able to provide an interface with computers that is as close as our mind can be while we still reside in our biological form.
Nanobots could also carefully scan the structure of our brain, providing a complete readout of the connections between each neuron. They would also record the current state of the brain. This information, when entered into a computer, could then continue to function as us. All that is required is a computer with large enough storage space and processing power. Is the pattern and state of neuron connections in our brain truly all that makes up our conscious selves? Many people believe firmly that we possess a soul, while some very technical people believe that quantum forces contribute to our awareness. But we have to think technically now. Note, however, that we need not know how the brain actually functions to transfer it to a computer; we need only know the media and contents. The actual mystery of how we achieved consciousness in the first place, or how we maintain it, is a separate discussion.

Uploading human brain: 

The uploading is possible by the use of small robots known as nanobots. These robots are small enough to travel throughout our circulatory system. Traveling into the spine and brain, they will be able to monitor the activity and structure of our central nervous system. They will be able to provide an interface with computers that is as close as our mind can be while we still reside in our biological form. Nanobots could also carefully scan the structure of our brain, providing a complete readout of the connections. This information, when entered into a computer, could then continue to function as us. Thus the data stored in the entire brain will be uploaded into the computer.
        
IBM, in partnership with scientists at Switzerland's Ecole Polytechnique Federale de Lausanne (EPFL) Brain and Mind Institute, will begin simulating the brain's biological systems and output the data as a working 3-dimensional model that will recreate the high-speed electro-chemical interactions that take place within the brain's interior. These include cognitive functions such as language, learning, perception and memory, in addition to brain malfunctions such as psychiatric disorders like depression and autism. From there, the modeling will expand to other regions of the brain and, if successful, shed light on the relationships between genetic, molecular and cognitive functions of the brain.
           
A model brain can already accurately echo the song of a South American sparrow. The birds sing by forcing air from their lungs past folds of tissue in the voice box. The electric impulses from the brain that drive the lungs were recorded, and when the equivalent impulses were passed to a computer model of the bird's lungs, it began to sing like the bird.
          
In conclusion, we will be able to transfer ourselves into computers at some point. Most arguments against this outcome are seemingly easy to circumvent: they are either simple-minded, or simply require more time for technology to advance. The only serious threats raised are also overcome as we note the combination of biological and digital technologies.

       Sources :  http://www.Bluebrain.com, http://www.bbrws.org

Sunday 16 December 2012

HUMAN ROBOT INTERACTION ---- ABSTRACT & SEMINARS

                                  Abstract of Human-Robot Interaction

A very important aspect in developing robots capable of Human-Robot Interaction (HRI) is the research in natural, human-like communication, and subsequently, the development of a research platform with multiple HRI capabilities for evaluation. Besides a flexible dialog system and speech understanding, an anthropomorphic appearance has the potential to support intuitive usage and understanding of a robot; e.g., human-like facial expressions and deictic gestures can be produced as well as understood by the robot. As a consequence of our effort to create an anthropomorphic appearance and to come close to a human-human interaction model for a robot, we decided to use human-like sensors, i.e., two cameras and two microphones only, in analogy to human perceptual capabilities.

Despite the challenges resulting from these limits with respect to perception, a robust attention system for tracking and interacting with multiple persons simultaneously in real time is presented. The tracking approach is sufficiently generic to work on robots with varying hardware, as long as stereo audio data and images from a video camera are available. To easily implement different interaction capabilities like deictic gestures, natural adaptive dialogs, and emotion awareness on the robot, we apply a modular integration approach utilizing XML-based data exchange. The paper focuses on our efforts to bring together different interaction concepts and perception capabilities integrated on a humanoid robot to achieve comprehending, human-oriented interaction.

Introduction of Human-Robot Interaction

For face detection, a method originally developed by Viola and Jones for object detection is adopted. Their approach uses a cascade of simple rectangular features that allows a very efficient binary classification of image windows into either the face or non-face class. This classification step is executed for different window positions and different scales to scan the complete image for faces. We apply the idea of a classification pyramid, starting with very fast but weak classifiers to reject image parts that are certainly not faces. With increasing classifier complexity, the number of remaining image parts decreases. The training of the classifiers is based on the AdaBoost algorithm, combining the weak classifiers iteratively into stronger ones until the desired level of quality is achieved.
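For readers who want to try the general approach, the sketch below runs a cascade face detector using OpenCV's bundled Haar cascade, which implements the Viola-Jones idea. It is a stand-in, not the authors' own classifier pyramid, and the file name and parameters are OpenCV defaults rather than values from the paper.

```python
# Sketch of cascade-based face detection in the spirit of Viola and Jones,
# using OpenCV's bundled Haar cascade.  This stands in for the authors' own
# classifier pyramid; the file name and parameters are OpenCV defaults, not
# values from the paper, and "frame.png" is a hypothetical camera frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("frame.png")                     # hypothetical camera frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Scan the image at several scales; weak early stages reject obvious non-faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                          # size and centre of each detection
    print("face at", (x + w // 2, y + h // 2), "size", (w, h))
```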


As an extension to the frontal view detection proposed by Viola and Jones, we additionally classify the horizontal gazing direction of faces, as shown in Fig. 4, by using four instances of the classifier pyramids described earlier, trained for faces rotated by 20°, 40°, 60°, and 80°. For classifying left- and right-turned faces, the image is mirrored at its vertical axis, and the same four classifiers are applied again. The gazing direction is evaluated for activating or deactivating the speech processing, since the robot should not react to people talking to each other in front of the robot, but only to communication partners facing the robot. Subsequent to the face detection, face identification is applied to the detected image region, using the eigenface method to compare the detected face with a set of trained faces. For each detected face, the size, center coordinates, horizontal rotation, and results of the face identification are provided at a real-time capable frequency of about 7 Hz on an Athlon64 2 GHz desktop PC with 1 GB RAM.
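A minimal sketch of the eigenface idea is given below: learn a PCA basis from a set of training faces, project a detected face onto it, and return the nearest trained face. The image size, the random stand-in training data and the number of components are placeholders, not details of the robot's implementation.

```python
# Sketch of the eigenface idea used for the identification step: project the
# detected face onto a PCA basis learned from trained faces and pick the
# nearest neighbour.  Image size, the random "training" data and the number of
# components are placeholders.
import numpy as np

rng = np.random.default_rng(1)
h, w, n_train = 32, 32, 20
train = rng.random((n_train, h * w))            # stand-in for flattened training faces

mean_face = train.mean(axis=0)
centered = train - mean_face
# Principal components ("eigenfaces") from the SVD of the centred training set.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                            # keep the 10 strongest components

def project(face):
    return eigenfaces @ (face - mean_face)

train_coords = np.array([project(f) for f in train])

def identify(face):
    """Return the index of the trained face whose projection is closest."""
    d = np.linalg.norm(train_coords - project(face), axis=1)
    return int(np.argmin(d)), float(d.min())

probe = train[3] + 0.01 * rng.random(h * w)     # a noisy copy of trained face 3
print(identify(probe))                          # expected to return index 3
```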
 
Voice Detection:

As mentioned before, the limited field of view of the cameras demands alternative detection and tracking methods. Motivated by human perception, sound localization is applied to direct the robot's attention. The integrated speaker localization (SPLOC) realizes both the detection of possible communication partners outside the field of view of the camera and the estimation of whether a person found by face detection is currently speaking. The program continuously captures the audio data from the two microphones.
         
To estimate the relative direction of one or more sound sources in front of the robot, the direction of the sound toward the microphones is considered. Depending on the position of a sound source in front of the robot, the run-time difference Δt results from the run times t_r and t_l to the right and left microphones. SPLOC compares the recorded audio signals of the left and the right microphone using a fixed number of samples for a cross power spectrum phase (CSP) to calculate the temporal shift between the signals. Taking the distance of the microphones d_mic and a minimum range of 30 cm to a sound source into account, it is possible to estimate the direction of a signal in 2-D space. For multiple sound source detection, not only the main energy value of the CSP result is taken, but also all values exceeding an adjustable threshold.
In 3-D space, the distance and height of a sound source are needed for an exact detection.

This information can be obtained from the face detection when SPLOC is used to check whether a found person is speaking or not. For coarsely detecting communication partners outside the field of view, standard values are used that are sufficiently accurate to align the camera properly and bring the person hypothesis into the field of view. The position of a sound source (a speaker's mouth) is assumed at a height of 160 cm for an average adult. The standard distance is adjusted to 110 cm, as observed during interactions with naive users.
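The sketch below shows the core CSP computation on synthetic data: whiten the cross spectrum of the two channels, locate the correlation peak to get the inter-microphone delay, and convert it to a bearing using the microphone spacing. The sample rate, the 20 cm spacing and the signals themselves are assumptions for the example, not parameters of the robot described above.

```python
# Sketch of the cross power spectrum phase (CSP) step: estimate the delay
# between left and right microphone signals and turn it into a bearing.
# Sample rate, microphone spacing and the synthetic signals are assumptions.
import numpy as np

FS = 16_000          # sample rate (Hz)
D_MIC = 0.20         # assumed microphone spacing d_mic (m)
C = 343.0            # speed of sound (m/s)

rng = np.random.default_rng(2)
n = 4096
source = rng.standard_normal(n)
true_delay = 5                                   # samples: source slightly off-centre
left = source
right = np.roll(source, true_delay)              # right channel lags the left

# Cross power spectrum phase: whiten the cross spectrum, inverse-transform,
# and take the lag of the strongest peak as the inter-microphone delay.
spec = np.fft.rfft(left) * np.conj(np.fft.rfft(right))
csp = np.fft.irfft(spec / (np.abs(spec) + 1e-12), n=n)
lag = int(np.argmax(csp))
if lag > n // 2:
    lag -= n                                     # map to a signed delay

tau = lag / FS                                   # temporal shift between the signals
angle = np.degrees(np.arcsin(np.clip(C * tau / D_MIC, -1.0, 1.0)))
print(f"delay {lag} samples -> bearing ~ {angle:.1f} degrees")
```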

MOBILE IPv6 ----- ABSTRACT & SEMINARS

                                                                     MOBILE IPv6
      

         Mobile IP is the IETF proposed standard solution for handling terminal mobility among IP subnets and was designed to allow a host to change its point of attachment transparently to an IP network. Mobile IP works at the network layer, influencing the routing of datagrams, and can easily handle mobility among different media (LAN, WLAN, dial-up links, wireless channels, etc.). Mobile IPv6 is a protocol being developed by the Mobile IP Working Group (abbreviated as MIP WG) of the IETF (Internet Engineering Task Force).
 
The intention of Mobile IPv6 is to provide functionality for handling terminal, or node, mobility between IPv6 subnets. Thus, the protocol was designed to allow a node to change its point of attachment to the IP network in such a way that the change does not affect the addressability and reachability of the node. Mobile IP was originally defined for IPv4, before IPv6 existed. MIPv6 is currently becoming a standard due to the inherent advantages of IPv6 over IPv4 and will therefore soon be ready for adoption in 3G mobile networks. Mobile IPv6 is a highly feasible mechanism for implementing static IPv6 addressing for mobile terminals. Mobility signaling and security features (IPsec) are integrated into the IPv6 protocol as header extensions.

LIMITATIONS OF IPv4

The current version of IP (known as version 4 or IPv4) has not changed substantially since RFC 791, which was published in 1981. IPv4 has proven to be robust, and easily implemented and interoperable. It has stood up to the test of scaling an internetwork to a global utility the size of today's Internet. This is a tribute to its initial design.

However, the initial design of IPv4 did not anticipate:

" The recent exponential growth of the Internet and the impending exhaustion of the IPv4 address space 

Although the 32-bit address space of IPv4 allows for 4,294,967,296 addresses, previous and current allocation practices limit the number of public IP addresses to a few hundred million. As a result, IPv4 addresses have become relatively scarce, forcing some organizations to use a Network Address Translator (NAT) to map a single public IP address to multiple private IP addresses.

" The growth of the Internet and the ability of Internet backbone routers to maintain large routing tables

   Because of the way that IPv4 network IDs have been (and are currently) allocated, there are routinely over 85,000 routes in the routing tables of Internet backbone routers today.
 
" The need for simpler configuration

Most current IPv4 implementations must be either manually configured or use a stateful address configuration protocol such as Dynamic Host Configuration Protocol (DHCP). With more computers and devices using IP, there is a need for a simpler and more automatic configuration of addresses and other configuration settings that do not rely on the administration of a DHCP infrastructure.

" The requirement for security at the IP level Private communication over a public medium like the Internet requires cryptographic services that protect the data being sent from being viewed or modified in transit. Although a standard now exists for providing security for IPv4 packets (known as Internet Protocol Security, or IPSec), this standard is optional for IPv4 and proprietary security solutions are prevalent.
 
" The need for better support for real-time delivery of data-also called quality of service (QoS)