Monday, 31 December 2012

FACE RECOGNITION ---- ABSTRACT AND SEMINARS



                                      Abstract of Face Recognition Using Neural Network

A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. In the broader sense, a neural network is a collection of mathematical models that emulate some of the observed properties of biological nervous systems and draw on the analogies of adaptive biological learning. It is composed of a large number of highly interconnected processing elements that are analogous to neurons and are tied together with weighted connections that are analogous to synapses.

To make this clearer, let us study the model of a neural network with the help of Figure 1. The most common neural network model is the multilayer perceptron (MLP). It is composed of hierarchical layers of neurons arranged so that information flows from the input layer to the output layer of the network. The goal of this type of network is to create a model that correctly maps the input to the output using historical data, so that the model can then be used to produce the output when the desired output is unknown.

A neural network is a sequence of neuron layers. A neuron is the building block of a neural net. It is very loosely based on the brain's nerve cell. Neurons receive inputs via weighted links from other neurons. These inputs are processed according to the neuron's activation function, and the resulting signals are then passed on to other neurons.
In more practical terms, neural networks are made up of interconnected processing elements called units, which are the counterparts of the brain's neurons.
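
To make the forward flow of information through such layers concrete, here is a minimal sketch in Python using NumPy. The layer sizes, random weights and the sigmoid activation are arbitrary illustrative choices, not taken from any particular face recognition system.

import numpy as np

def sigmoid(x):
    # Smooth activation function that squashes values into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, weights, biases):
    # Propagate an input vector through an MLP: each layer multiplies by
    # its weight matrix (the "synapses"), adds a bias, and applies the
    # activation function before passing the result to the next layer.
    for W, b in zip(weights, biases):
        x = sigmoid(W @ x + b)
    return x

# Toy network: 4 inputs -> 3 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(2, 3))]
biases = [np.zeros(3), np.zeros(2)]
print(mlp_forward(np.array([0.1, 0.5, 0.2, 0.9]), weights, biases))

In a trained network the weights would be learned from historical input/output pairs rather than drawn at random.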

A neural network can be considered an artificial system that could perform "intelligent" tasks similar to those performed by the human brain.
Neural networks resemble the human brain in the following ways:

1. A neural network acquires knowledge through learning.

2. A neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights.
3. Neural networks can modify their own topology, just as neurons in the brain can die and new synaptic connections can grow.

Why choose face recognition over other biometrics?

There are a number of reasons to choose face recognition. These include the following:

1. It requires no physical interaction on behalf of the user.
2. It is accurate and allows for high enrolment and verification rates.
3. It does not require an expert to interpret the comparison result.
4. It can use your existing hardware infrastructure; existing cameras and image capture devices will work with no problems.
5. It is the only biometric that allows you to perform passive identification in a one-to-many environment (e.g. identifying a terrorist in a busy airport terminal).

The face is an important part of who you are and how people identify you. Except in the case of identical twins, the face is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish faces for millions of years, computers are only now catching up. In face recognition there are two types of comparison. The first is verification, where the system compares the given individual with who that individual says they are and gives a yes or no decision. The second is identification, where the system compares the given individual to all the other individuals in the database and gives a ranked list of matches. All identification or authentication technologies operate using the following four stages:

1. Capture: a physical or behavioural sample is captured by the system during enrollment and also during the identification or verification process.
2. Extraction: unique data is extracted from the sample and a template is created.
3. Comparison: the template is then compared with a new sample.
4. Match/non-match: the system decides whether the features extracted from the new sample are a match or a non-match.

Face recognition starts with a picture, attempting to find a person in the image. This can be accomplished using several methods including movement, skin tones, or blurred human shapes. The face recognition system locates the head and finally the eyes of the individual. A matrix is then developed based on the characteristics of the individual’s face. The method of defining the matrix varies according to the algorithm (the mathematical process used by the computer to perform the comparison). This matrix is then compared to matrices that are in a database and a similarity score is generated for each comparison.
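
To make the verification/identification distinction and the similarity score concrete, here is a hedged sketch in Python. It assumes the matrix developed for each face has already been flattened into a feature vector, and it uses cosine similarity as the scoring function purely for illustration; real systems use algorithm-specific comparison metrics and thresholds.

import numpy as np

def similarity(a, b):
    # Cosine similarity between two face feature vectors (illustrative choice).
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, database):
    # Identification: compare the probe against every enrolled template
    # and return a ranked list of (name, score) matches.
    scores = [(name, similarity(probe, tmpl)) for name, tmpl in database.items()]
    return sorted(scores, key=lambda item: item[1], reverse=True)

def verify(probe, claimed_template, threshold=0.9):
    # Verification: a yes/no decision against a single claimed identity.
    return similarity(probe, claimed_template) >= threshold

# Hypothetical enrolled templates and a probe vector.
database = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}
probe = [0.85, 0.15, 0.35]
print(identify(probe, database))
print(verify(probe, database["alice"]))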

Artificial intelligence is used to simulate human interpretation of faces. In order to increase the accuracy and adaptability, some kind of machine learning has to be implemented.

There are essentially two methods of capture. One is video imaging and the other is thermal imaging. Video imaging is more common, as standard video cameras can be used. The precise position and angle of the head and the surrounding lighting conditions may affect system performance. The complete facial image is usually captured and a number of points on the face can then be mapped, such as the positions of the eyes, mouth and nostrils. More advanced technologies make a 3-D map of the face, which multiplies the possible measurements that can be made.
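
One simple way to turn such mapped points into comparable numbers is to measure the distances between them. The sketch below uses made-up 2-D landmark coordinates purely to show the idea; real systems derive far richer measurements, especially from 3-D maps.

import itertools
import math

def landmark_features(landmarks):
    # Build a feature vector from the pairwise distances between
    # mapped facial points (eyes, nostrils, mouth, ...).
    names = sorted(landmarks)
    return [math.dist(landmarks[a], landmarks[b])
            for a, b in itertools.combinations(names, 2)]

# Hypothetical landmark positions in image coordinates.
face = {"left_eye": (120, 80), "right_eye": (180, 82),
        "nose": (150, 120), "mouth": (150, 160)}
print(landmark_features(face))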

Thermal imaging has better accuracy, as it uses facial temperature variations caused by vein structure as the distinguishing traits. Since the heat pattern is emitted by the face itself without any external radiation source, these systems can capture images regardless of lighting conditions, even in the dark. The drawback is high cost: thermal cameras are considerably more expensive than standard video cameras.

Face recognition technologies have generally been associated with very costly, high-security applications. Today the core technologies have evolved and the cost of equipment is dropping dramatically due to integration and increasing processing power. Certain applications of face recognition technology are now cost-effective, reliable and highly accurate. As a result there are no technological or financial barriers to moving from pilot projects to widespread deployment.



Tuesday, 18 December 2012

CELLULAR POSITIONING --- ABSTRACT & SEMINARS

                                                   CELLULAR POSITIONING

Introduction:

          Location-related products are the next major class of value-added services that mobile network operators can offer their customers. Not only will operators be able to offer entirely new services to customers, but they will also be able to offer improvements on current services such as location-based prepaid or information services. The deployment of location-based services is being spurred by several factors:

Competition:

            The need to find new revenue-enhancing and differentiating value-added services has been increasing and will continue to increase over time.

Regulation:

            The Federal Communications Commission (FCC) of the USA adopted a ruling in June 1996 (Docket No. 94-102) that requires all mobile network operators to provide location information on all calls to "911", the emergency services. The FCC mandated that by 1 October 2001, all wireless 911 calls must be pinpointed within 125 meters, 67% of the time. On December 24, 1998, the FCC amended its ruling to allow terminal-based solutions as well as network-based ones (CC Docket No. 94-102, Waivers for Handset-Based Approaches). There are a number of regulations that location-based services must comply with, not least of all to protect the privacy of the user. Mobile Streams believes that it is essential to comply with all such regulations fully. However, such regulations are only the starting point for such services; there are possibilities for a high degree of innovation in this new market that should not be overlooked.

Technology

        There have been continuous improvements in handset, network and positioning technologies. For example, in 1999 Benefon, a Finnish GSM and NMT terminal vendor, launched the ESC! GSM/GPS mapping phone.

Needs Of Cellular Positioning:
 
            There are a number of reasons why it is useful to be able to pinpoint the position of a mobile telephone, some of which are described below.

Location-sensitive billing: Different tariffs can be offered depending on the position of the cell phone. This allows an operator without a copper-cable-based PSTN to offer competitive rates for calls from home or the office.

Increased subscriber safety: A significant number of emergency calls, such as US 911 calls, come from cell phones, and in most cases the caller cannot provide accurate information about their position. As a real-life example, take the following incident. In February 1997 a person became stranded along a highway during a winter blizzard (Associated Press, 1997). She used her cellular phone to call for help but could not provide her location due to white-out conditions. To identify the caller's approximate position, the authorities asked her to tell them when she could hear the search plane flying above. From the time of her first call, forty hours elapsed before a ground rescue team reached her. An automatic positioning system would have allowed rescuers to reach her far sooner.

Positioning Techniques:

            There are a variety of ways in which position can be derived from the measurement of signals, and these can be applied to any cellular system, including GSM. The important measurements are the Time of Arrival (TOA), the Time Difference of Arrival (TDOA), the Angle of Arrival (AOA) and the carrier phase. Each measurement places the object to be positioned on a particular locus. Multiple measurements give multiple loci, and the point of their intersection gives the position. If the density of base stations is such that more measurements can be made than are strictly required, a least-squares approach can be used. If the measurements are too few in number, the loci will intersect at more than one point, resulting in an ambiguous position estimate. In the following discussion we assume that the mobile station and base stations lie in the same plane. This is approximately true for most networks unless the geography includes hilly terrain or high-rise buildings.

Time of Arrival (TOA):

           In a remote positioning system this involves measuring the propagation time of a signal from the mobile phone to a base station. Each measurement fixes the position of the mobile on a circle. With two stations there will be two circles, and they can intersect in at most two points. This gives rise to an ambiguity, which is resolved by including a priori information about the trajectory of the mobile phone or by making a propagation time measurement to a third base station.
           TOA measurement requires exact time synchronization between the base stations, and the receiver must have an accurate clock so that it knows the exact time of transmission and can make an exact TOA measurement.
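
A hedged sketch of how TOA measurements can be turned into a position estimate: each measured propagation time places the mobile on a circle of radius c*t around a base station, and with three or more stations a least-squares fit resolves the ambiguity. The station coordinates and times below are invented for illustration.

import numpy as np

C = 3.0e8  # propagation speed (speed of light), m/s

def toa_position(stations, times, guess=(0.0, 0.0), iters=20):
    # Least-squares fit of a 2-D position from time-of-arrival ranges,
    # using simple Gauss-Newton iterations on the circle residuals.
    x = np.array(guess, float)
    stations = np.asarray(stations, float)
    ranges = C * np.asarray(times, float)
    for _ in range(iters):
        diffs = x - stations                   # vectors from stations to estimate
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residuals = dists - ranges             # how far off each circle we are
        J = diffs / dists[:, None]             # Jacobian of the range model
        step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x -= step
    return x

# Three base stations and noise-free times from a mobile at (400, 300).
stations = [(0, 0), (1000, 0), (0, 1000)]
true = np.array([400.0, 300.0])
times = [np.linalg.norm(true - np.array(s)) / C for s in stations]
print(toa_position(stations, times, guess=(100.0, 100.0)))

With only two stations the same residuals admit two solutions, which is exactly the ambiguity described above.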

Monday, 17 December 2012

NON VISIBLE IMAGING ---- ABSTRACT & SEMINARS

                                                              NON VISIBLE IMAGING


             Near infrared light consists of light just beyond visible red light (wavelengths greater than 780 nm). Contrary to popular belief, near infrared photography does not record thermal radiation (heat); far-infrared thermal imaging requires more specialized equipment. Infrared images exhibit a few distinct effects that give them an exotic, antique look. Plant life looks completely white because it reflects almost all infrared light (because of this effect, infrared photography is commonly used in aerial photography to analyze crop yields, pest control, etc.). The sky is a stark black because no infrared light is scattered. Human skin looks pale and ghostly. Dark sunglasses all but disappear in infrared because they don't block any infrared light, and it's said that you can even capture the near-infrared emissions of a common iron.
            
             Infrared photography has been around for at least 70 years, but until recently has not been easily accessible to those not versed in traditional photographic processes. Since the charge-coupled devices (CCDs) used in digital cameras and camcorders are sensitive to near-infrared light, they can be used to capture infrared photos. With a filter that blocks out all visible light (also frequently called a "cold mirror" filter), most modern digital cameras and camcorders can capture photographs in infrared. In addition, they have LCD screens, which can be used to preview the resulting image in real-time, a tool unavailable in traditional photography without using filters that allow some visible (red) light through.

INTRODUCTION:


             Near-infrared (1000-3000 nm) spectrometry, which employs an external light source for determination of chemical composition, has previously been utilized for industrial determination of the fat content of commercial meat products, for in vivo determination of body fat, and in our laboratories for determination of lipoprotein composition in carotid artery atherosclerotic plaques. Near-infrared (IR) spectrometry has been used industrially for several years to determine the saturation of unsaturated fatty acid esters (1). Near-IR spectrometry uses a tunable light source external to the experimental subject to determine its chemical composition.

           Industrial utilization of near-IR will allow for the in vivo measurement of the tissue-specific rate of oxygen utilization as an indirect estimate of energy expenditure. However, assessment of regional oxygen consumption by these methods is complex, requiring a high level of surgical skill for implantation of indwelling catheters to isolate the organ under study.

NUCLEAR BATTERIES -DAINTIEST DYNAMICS

                                      NUCLEAR BATTERIES -DAINTIEST DYNAMICS 


           Micro-electro-mechanical systems (MEMS) comprise a rapidly expanding research field with potential applications varying from sensors in air bags, wrist-worn GPS receivers, and matchbox-size digital cameras to more recent optical applications. Depending on the application, these devices often require an on-board power source for remote operation, especially in cases requiring operation for an extended period of time. In the quest to boost micro-scale power generation, several groups have turned their efforts to well-known energy sources, namely hydrogen and hydrocarbon fuels such as propane, methane, gasoline and diesel. Some groups are developing micro fuel cells that, like their macro-scale counterparts, consume hydrogen to produce electricity. Others are developing on-chip combustion engines, which actually burn a fuel like gasoline to drive a minuscule electric generator. But all these approaches have difficulties regarding low energy densities, elimination of by-products, downscaling and recharging. These difficulties can be overcome to a large extent by the use of nuclear micro batteries.
           
           Radioisotope thermoelectric generators (RTGs) exploit the extraordinary potential of radioactive materials for generating electricity. RTGs are particularly used for generating electricity in space missions. They use a process known as the Seebeck effect. The problem with RTGs is that they don't scale down well, so scientists had to find other ways of converting nuclear energy into electrical energy. They have succeeded by developing nuclear batteries.
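
As a rough illustration of the Seebeck effect that RTGs rely on, the open-circuit voltage of a thermoelectric leg is approximately the Seebeck coefficient times the temperature difference across it. The numbers below are placeholders, not data for any real RTG.

def seebeck_voltage(seebeck_coeff_v_per_k, t_hot_k, t_cold_k):
    # Open-circuit thermoelectric voltage: V = S * (T_hot - T_cold).
    return seebeck_coeff_v_per_k * (t_hot_k - t_cold_k)

# Placeholder values: S = 200 microvolts/K and a 500 K temperature difference.
print(seebeck_voltage(200e-6, 800.0, 300.0), "V per thermoelectric leg")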

NUCLEAR BATTERIES

            Nuclear batteries use the incredible amount of energy released naturally by tiny bits of radioactive material, without any fission or fusion taking place inside the battery. These devices use thin radioactive films that pack in energy at densities thousands of times greater than those of lithium-ion batteries. Because of the high energy density, nuclear batteries are extremely small in size. Considering the small size and shape of the battery, the scientists who developed it fancifully call it the "daintiest dynamo". The word 'dainty' means pretty.

            Scientists have developed two types of micro nuclear batteries. One is the junction-type battery and the other is the self-reciprocating cantilever. The operation of each is explained below.

 JUNCTION TYPE BATTERY

           This kind of nuclear battery directly converts the high-energy particles emitted by a radioactive source into an electric current. The device consists of a small quantity of Ni-63 placed near an ordinary silicon p-n junction - a diode, basically.

WORKING:

            As the Ni-63 decays it emits beta particles, which are high-energy electrons that spontaneously fly out of the radioisotope's unstable nucleus. The emitted beta particles ionize the diode's atoms, creating electron-hole pairs that are separated in the vicinity of the p-n interface. These separated electrons and holes stream away from the junction, producing a current.
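
A back-of-the-envelope sketch of the current such a junction might deliver, under stated assumptions: each collected beta particle deposits its energy in the silicon, and roughly one electron-hole pair is created per 3.6 eV deposited. The activity, mean beta energy and collection efficiency below are illustrative placeholders, not measured values.

Q_E = 1.602e-19          # electron charge, coulombs
PAIR_ENERGY_EV = 3.6     # approximate energy to create one e-h pair in silicon

def betavoltaic_current(activity_bq, mean_beta_energy_ev, collection_eff):
    # decays/s * (pairs created per decay) * (fraction collected) * charge per pair
    pairs_per_decay = mean_beta_energy_ev / PAIR_ENERGY_EV
    return activity_bq * pairs_per_decay * collection_eff * Q_E

# Placeholder numbers: 1 GBq source, ~17 keV mean beta energy, 5% collected.
print(betavoltaic_current(1e9, 17e3, 0.05), "amperes")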

PUSH TECHNOLOGY

                                                              PUSH TECHNOLOGY

        
        Push technology reverses the Internet's content delivery model. Before push, content publishers had to rely upon the end-user's own initiative to bring them to a web site or download content. With push technology the publisher can deliver content directly to the user's PC, thus substantially improving the likelihood that the user will view it. Push content can be extremely timely, and delivered fresh several times a day. Information keeps coming to the user whether he asked for it or not. The most common analogy for push technology is a TV channel: it keeps sending us stuff whether we care about it or not.
            
            Push was created to alleviate two problems facing users of the net. The first problem is information overload. The volume and dynamic nature of content on the internet is an impediment to users, and has become an ease-of-use issue. Without push, using internet applications can be tedious, time-consuming, and less than dependable: users have to manually hunt down information, search out links, and monitor sites and information sources. Push applications and technology building blocks narrow that focus and add considerable ease of use. The second problem is that most end-users are restricted to low-bandwidth internet connections, such as 33.3 kbps modems, making it difficult to receive multimedia content. Push technology provides a means to pre-deliver much larger packages of content.
             
            Push technology enables the delivery of multimedia content on the internet through the use of local storage and transparent content downloads. Like a faithful delivery agent, push, often referred to as broadcasting, delivers content directly to the user transparently and automatically. It is one of the internet's most promising technologies.

             Already a success, push is being used to pump data in the form of news, current affairs, sports and so on to many computers connected to the internet. Updating software is one of the fastest growing uses of push; it is a new and exciting way to manage software update and upgrade hassles. Using the internet today without the aid of a push application can be tedious, time-consuming, and less than dependable. Computer programming is an inexact art, and there is a huge need to quickly and easily get bug fixes, software updates, and even whole new programs out to people. Without push, users have to manually hunt down information, search out links, and monitor sites and information sources.

2. THE PUSH PROCESS

            For the end user, the process of receiving push content is quite simple. First, an individual subscribes to a publisher's site or channel by providing content preferences. The subscriber also sets up a schedule specifying when information should be delivered. Based on the subscriber's schedule, the PC connects to the internet, and the client software notifies the publisher's server that the download can occur. The server collates the content matching the subscriber's profile and downloads it to the subscriber's machine, after which the content is available for the subscriber's viewing.

WORKING

            Interestingly enough, from a technical point of view, most push applications are pull and just appear to be 'push' to the user. In fact, a more accurate description of this process would be 'automated pull'.
            The web currently requires the user to poll sites for new or updated information. This manual polling and downloading process is referred to as 'pull' technology. From a business point of view, this process provides little information about the user, and even less control over what information is acquired. It is the user who has to keep track of the locations of information sites, and the user who has to continuously search for informational changes - a very time-consuming process. The 'push' model alleviates much of this tedium.
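
A minimal sketch of the 'automated pull' idea in Python: a client that polls a publisher's feed on the subscriber's schedule and only downloads content that has changed. The URL, polling interval and ETag-based caching below are hypothetical choices for illustration, not taken from any particular push product.

import time
import urllib.error
import urllib.request

FEED_URL = "http://example.com/channel/news"   # hypothetical publisher feed
POLL_INTERVAL_S = 3600                         # subscriber's delivery schedule

def fetch_if_changed(url, last_etag):
    # Ask the server for new content; an unchanged ETag means nothing new.
    req = urllib.request.Request(url)
    if last_etag:
        req.add_header("If-None-Match", last_etag)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read(), resp.headers.get("ETag")
    except urllib.error.HTTPError as err:
        if err.code == 304:    # 304 Not Modified: keep the cached copy
            return None, last_etag
        raise

def run_client():
    etag = None
    while True:
        content, etag = fetch_if_changed(FEED_URL, etag)
        if content is not None:
            print("new content delivered:", len(content), "bytes")
        time.sleep(POLL_INTERVAL_S)   # wait for the next scheduled pull

# run_client()  # would loop forever, pulling on the subscriber's schedule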

CARBON NANOTUBE FLOW SENSORS --- ABSTRACT & SEMINARS

                                                CARBON NANOTUBE FLOW SENSORS

Introduction:

           Direct generation of measurable voltages and currents is possible when a fluid flows over a variety of solids, even at the modest speed of a few meters per second. In the case of gases, the underlying mechanism is an interesting interplay of Bernoulli's principle and the Seebeck effect: pressure differences along streamlines give rise to temperature differences across the sample; these in turn produce the measured voltage. The electrical signal is quadratically dependent on the Mach number M and proportional to the Seebeck coefficient of the solid.
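
The dependence described above can be written as V ≈ k * S * M^2, where S is the Seebeck coefficient of the solid, M is the Mach number of the flow and k is a constant that would have to be calibrated for a real device. The sketch below simply evaluates that relation with placeholder numbers.

SOUND_SPEED_M_S = 343.0   # approximate speed of sound in air at room temperature

def flow_signal_volts(gas_speed_m_s, seebeck_coeff_v_per_k, k_calibration):
    # Voltage generated by gas flow over the solid:
    # quadratic in Mach number, proportional to the Seebeck coefficient.
    mach = gas_speed_m_s / SOUND_SPEED_M_S
    return k_calibration * seebeck_coeff_v_per_k * mach ** 2

# Placeholder values: 10 m/s flow, S = 40 microvolts/K, arbitrary calibration k.
print(flow_signal_volts(10.0, 40e-6, 1.0e3), "V")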

            This discovery was made by Professor Ajay Sood and his student Shankar Ghosh of IISc Bangalore. They had previously discovered that the flow of liquids, even at low speeds ranging from 10^-1 m/s to 10^-7 m/s (that is, over six orders of magnitude), through bundles of atomic-scale straw-like tubes of carbon known as nanotubes, generated tens of microvolts across the tubes in the direction of the flow of the liquid. The results of the experiments by Professor Sood and Ghosh show that gas flow sensors and energy conversion devices can be constructed based on direct generation of electrical signals. The experiments were done on single-walled carbon nanotubes (SWNTs). The effect is not confined to nanotubes alone; it is also observed in doped semiconductors and metals.

          The observed effect immediately suggests a technology application, namely gas flow sensors that measure gas velocity from the electrical signal generated. Unlike existing gas flow sensors, which are based on heat transfer from an electrically heated sensor to the fluid, a device based on this newly discovered effect would be an active gas flow sensor that gives a direct electrical response to the gas flow. One possible application is in the field of aerodynamics: several local sensors could be mounted on an aircraft body or aerofoil to measure streamline velocities and the effect of drag forces. Energy conversion devices could also be constructed based on direct generation of electrical signals; if one is able to cascade millions of these tubes, electrical energy could be produced.

          As the state of the art moves towards atomic scales, sensing presents a major hurdle. The discovery of carbon nanotubes by Sumio Iijima at NEC, Japan in 1991 has provided new channels towards this end. A carbon nanotube (CNT) is a sheet of graphene that has been rolled up and capped with fullerene-like caps at the ends. Nanotubes are exceptionally strong, have excellent thermal conductivity, are chemically inert and have interesting electronic properties that depend on their chirality. The main reason for the popularity of CNTs is their unique properties: they are very strong, mechanically robust, and have a high Young's modulus and aspect ratio. These properties have been studied experimentally as well as with numerical tools. The bandgap of CNTs is in the range of 0-100 meV, and hence they can behave as both metals and semiconductors.
          
             Many factors, such as the presence of a chemical species, mechanical deformation or a magnetic field, can cause significant changes in the band gap, which consequently affect the conductance of the CNTs. These unique electronic properties, coupled with strong mechanical strength, are exploited in various sensors. The recently discovered property of flow-induced voltage in nanotubes, reported by two Indian scientists, has added another dimension to micro-sensing devices.

CNT Electronic Properties
 
           Electrically, CNTs can be either semiconducting or metallic in nature, which is determined by the type of nanotube: its chiral angle, its diameter, the relation between the tube indices, and so on. The electronic structure and properties are based on the two-dimensional structure of graphene. For instance, if the tube indices n and m satisfy the condition n - m = 3q, where q is an integer, the tube behaves as a metal, in the sense that it has zero band-gap energy. In the armchair case (where n = m) the bands cross at the Fermi level. Otherwise the tube is expected to behave as a semiconductor.
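
The n - m = 3q rule lends itself to a one-line check. The sketch below classifies a nanotube from its chiral indices; it states only the textbook rule given above and ignores curvature corrections that matter for very small tubes.

def cnt_character(n, m):
    # Classify an (n, m) carbon nanotube from its chiral indices.
    if n == m:
        return "armchair (metallic, bands cross at the Fermi level)"
    if (n - m) % 3 == 0:
        return "metallic (zero band gap by the n - m = 3q rule)"
    return "semiconducting"

for indices in [(10, 10), (9, 0), (10, 0)]:
    print(indices, "->", cnt_character(*indices))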

Fluid Flow Through Carbon Nanotube
 
             Recently there has been extensive study of the effect of fluid flow through nanotubes, as part of an ongoing effort worldwide to create, in the microscopic nano-world, representatives of all the sensing elements in our present macroscopic world. The Indian Institute of Science has made a major contribution in this regard. It was theoretically predicted that the flow of a liquid medium would lead to the generation of a flow-induced voltage, and this was experimentally established by two Indian scientists at IISc. Only the effect of liquids had been theoretically investigated and experimentally established; the effect of gas flow over nanotubes was not investigated until A.K. Sood and Shankar Ghosh of IISc studied it experimentally and provided a theoretical explanation for it.

           The same effect as in the case of liquids was observed, but for an entirely different reason. These results have interesting applications in biotechnology and can be used in sensing applications; micro devices could be powered by exploiting these properties.

MESH RADIO

                                                               MESH RADIO:

Introduction:
 
         Governments are keen to encourage the roll-out of broadband interactive multimedia services to business and residential customers because they recognise the economic benefits of e-commerce, information and entertainment. Digital cable networks can provide a compelling combination of simultaneous services including broadcast TV, VOD, fast Internet and telephony. Residential customers are likely to be increasingly attracted to these bundles as the cost can be lower than for separate provision. Cable networks have therefore been implemented or upgraded to digital in many urban areas in the developed countries.

          ADSL has been developed by telcos to allow on-demand delivery via copper pairs. A bundle comparable to cable can be provided if ADSL is combined with PSTN telephony and satellite or terrestrial broadcast TV services, but incumbent telcos have been slow to roll it out and 'unbundling' has not proved successful so far. Some telcos have been accused of restricting ADSL performance and keeping prices high to protect their existing business revenues. Prices have recently fallen, but even now the ADSL (and SDSL) offerings are primarily targeted at provision of fast (but contended) Internet services for SME and SOHO customers. This slow progress (which is partly due to the unfavourable economic climate) has also allowed cable companies to move slowly.

          A significant proportion of customers in suburban and semi-rural areas will only be able to have ADSL at lower rates because of the attenuation caused by the longer copper drops. One solution is to take fibre out to street cabinets equipped for VDSL but this is expensive, even where ducts are already available.
Network operators and service providers are increasingly beset by a wave of technologies that could potentially close the gap between their fibre trunk networks and a client base that is all too anxious for the industry to accelerate the rollout of broadband. While the established vendors of copper-based DSL and fibre-based cable are finding new business, many start-up operators, discouraged by the high cost of entry into wired markets, have been looking to evolving wireless radio and laser options.
          
          One relatively late entrant into this competitive mire is mesh radio, a technology that has quietly emerged to become a potential holder of the title 'next big thing'. Mesh Radio is a new approach to Broadband Fixed Wireless Access (BFWA) that avoids the limitations of point to multi-point delivery. It could provide a cheaper '3rd Way' to implement residential broadband that is also independent of any existing network operator or service provider. 

         Instead of connecting each subscriber individually to a central provider, each is linked to several other subscribers nearby by low-power radio transmitters; these in turn are connected to others, forming a network, or mesh, of radio interconnections that at some point links back to the central transmitter.
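
To illustrate how traffic finds its way through such a mesh, here is a small sketch that models subscribers as nodes in a graph and finds the shortest hop-count route back to the central transmitter with a breadth-first search. The topology is invented, and real mesh radio systems use richer routing metrics than hop count.

from collections import deque

def shortest_path(links, start, goal):
    # Breadth-first search over the radio links: returns the hop-by-hop
    # route from a subscriber back to the central node, or None if unreachable.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in links.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Hypothetical mesh: each subscriber can hear a few nearby subscribers.
links = {
    "central": ["a", "b"],
    "a": ["central", "c"],
    "b": ["central", "d"],
    "c": ["a", "e"],
    "d": ["b"],
    "e": ["c"],
}
print(shortest_path(links, "e", "central"))   # ['e', 'c', 'a', 'central']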