What lies beyond Gigabit Ethernet? Will IPv6 go mainstream anytime soon? How can companies reduce the wastage of computing resources in server farms and data centres? Prashant L Rao peers into a crystal ball and tells you what the future holds for networking.
Ethernet has come a long way since it was first implemented in the 1970s. In its 30 years of existence it has become ubiquitous; a widely standardised, plug-and-play technology that is used in over 90 percent of corporate LANs. Ethernet originally ran over thick co-ax and provided users with a shared 10 Mbps bandwidth connection. It soon progressed to running over unshielded twisted pair and offering dedicated 10 Mbps connections using switches. Today, switched Fast Ethernet enables dedicated 100 Mbps to the desktop with 1 Gbps trunks.
Gigabit Ethernet
Both 10 Mbps and 100 Mbps Ethernet have evolved over the last 20 years to become the local networking standard because of their reliability, low cost and ease of installation. However, streaming multimedia applications are becoming commonplace in the industry, and 100 Mbps doesn’t quite cut it for all of them. That’s where Gigabit Ethernet comes into the picture. With Fast Ethernet you had 10/100 switches; with Gigabit you have 10/100/1000 switches. 10 Gigabit Ethernet is currently in development; it will work only over fibre, unlike Gigabit, which also works over copper.
 
Enterprises have been using routers to connect to WANs through E1 pipes (2 Mbps). Analysts predict that while Ethernet switches are capable of delivering 1 Gbps, routers that support a maximum speed of 45 Mbps could become the main bottleneck on enterprise WANs. Some even predict that routers may not be used in future.
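A rough back-of-the-envelope comparison makes that gap concrete. The sketch below uses only the speeds quoted above (1 Gbps switching, a 45 Mbps router, a 2 Mbps E1) and ignores protocol overhead; it is illustrative arithmetic, not a benchmark.

```python
# How long a 1 GB transfer takes at the speeds quoted above.
# Illustrative only: assumes the link is idle and ignores protocol overhead.

GIGABYTE_BITS = 8 * 10**9  # 1 GB (decimal) expressed in bits

links_bps = {
    "Gigabit Ethernet switch": 1_000_000_000,
    "45 Mbps enterprise router": 45_000_000,
    "2 Mbps E1 WAN pipe": 2_000_000,
}

for name, bps in links_bps.items():
    print(f"{name:26s}: {GIGABYTE_BITS / bps:8.1f} seconds per gigabyte")
```

At 2 Mbps the same gigabyte that crosses the LAN in eight seconds takes over an hour, which is why the WAN link, not the switch, sets the ceiling.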
The physical performance of Gigabit Ethernet is most often compared to ATM, but the debate continues on whether Gigabit Ethernet can take on duties formerly handled by ATM. Gigabit Ethernet has made substantial strides towards ATM capabilities with the introduction of frame-based quality-of-service (QoS) features and IP switching.
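The frame-based QoS mentioned above is, in practice, usually the IEEE 802.1p priority carried in the 802.1Q VLAN tag; the article does not name the mechanism, so treat this as an assumption. A minimal sketch of how the 16-bit Tag Control Information (TCI) field packs that priority:

```python
# 802.1Q Tag Control Information: PCP (3-bit priority) | CFI/DEI (1 bit) | VLAN ID (12 bits).
# The 3-bit PCP is the 802.1p priority used for frame-based QoS.

def build_tci(priority: int, dei: int, vlan_id: int) -> int:
    assert 0 <= priority <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
    return (priority << 13) | (dei << 12) | vlan_id

# Voice and video frames are commonly tagged with a high priority such as 5 or 6.
tci = build_tci(priority=6, dei=0, vlan_id=100)
print(f"TCI = 0x{tci:04x}")  # -> 0xc064
```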
Beyond Gigabit
“10 Gbps is very close to memory speed. Putting in 10 Gbps won’t improve performance for that reason. 10 Gbps will be used to consolidate bigger networks,” says Pramod S, systems specialist at Apara. S Vishwanathan, SE manager at Wipro Infotech, says, “The chances of any other technology replacing Ethernet are dim.” There’s a roadmap all the way from Gigabit to 10 Gbps to 100 Gbps. But as Vishwanathan points out, “No application demands this kind of bandwidth—plus you can have multiple links on 10 Gbps.”
Even Gigabit Ethernet has barely made it to the desktop segment. Vishwanathan estimates that only 1-2 percent of the Indian market is using Gigabit on the desktop. Within a few years, industry experts predict 1 Gbps to the desktop and 10 Gbps trunks.
IPv6
Internet Protocol Version 6 (IPv6) has been designed to fix many shortcomings of today’s IP, IPv4. It has features such as automatic routing and network reconfiguration. A simpler packet header also allows for faster processing (the IPv6 header has eight fields against 13 in IPv4). IPv6 replaces IPv4’s 32-bit addressing with 128-bit addressing.
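The jump from 32-bit to 128-bit addressing is easiest to appreciate with a quick sketch using Python’s standard ipaddress module (used here purely for illustration; the example addresses come from the reserved documentation ranges):

```python
import ipaddress

# Address space sizes: IPv4 (32-bit) vs IPv6 (128-bit).
print(f"IPv4 addresses: 2**32  = {2**32:,}")
print(f"IPv6 addresses: 2**128 = {2**128:,}")

# One address written in each notation (documentation-range examples).
v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v4)   # 4 192.0.2.1
print(v6.version, v6)   # 6 2001:db8::1
print(v6.exploded)      # 2001:0db8:0000:0000:0000:0000:0000:0001
```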
IPv6 simplifies many operations that the existing IP handles only through patched-up or afterthought implementations. Also, some of the capabilities of IPv6 are simply not there in IPv4. However, the cost of overhauling the existing IPv4-based Internet infrastructure will delay the deployment of IPv6. Eventually, however, IPv6 will prevail.
Teevra Bose, national product manager, NBU at Apara, says, “IPv6 is still on the test-bed. The lack of IPs in IPv4 is not a potential threat; people have learned how to work around the problem.”
VoIP
“In the corporate market, VoIP could overtake traditional telephony,” says Vishwanathan. A number of quality standards give voice traffic the highest priority. “QoS has been taken care of,” he adds. Codecs exist that let you compress a voice channel to anything between 5 and 64 Kbps; 8 Kbps is the present standard. “The future of VoIP will parallel developments in IP technology. Everybody understands IP. The customer only has to maintain a single set-up. IP PBXs already support 60-70 percent of the functionality of a traditional PBX; it will take 3-5 years for IP PBXs to match the analog market,” concludes Vishwanathan.
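A rough capacity calculation shows why the codec rate matters. The sketch below uses the codec figures quoted above and assumes roughly 16 Kbps of RTP/UDP/IP packet overhead per call; that overhead figure is an assumption for illustration (it varies with packetisation interval and header compression), not something from the article.

```python
# Approximate number of simultaneous voice calls an E1 (2 Mbps) can carry
# at different codec rates. The per-call packet overhead is an assumed figure.

E1_BPS = 2_000_000
OVERHEAD_BPS = 16_000  # assumed RTP/UDP/IP overhead per call

for codec_kbps in (5, 8, 64):
    per_call_bps = codec_kbps * 1000 + OVERHEAD_BPS
    print(f"{codec_kbps:2d} Kbps codec: ~{E1_BPS // per_call_bps} calls on one E1")
```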
Cisco has first-hand experience in all this, having built the largest VoIP network in the world in China; it has grown to transmit 500 million minutes a month.
iSCSI
iSCSI has to be implemented on a Gigabit infrastructure; this is cheaper than a Fibre Channel network, but not as cheap or as easy as a 10/100 network. Says Pramod, “iSCSI competes with Fibre Channel as an access protocol. The number of users is low at present, but by the end of 2003 it should catch up. iSCSI’s limitation is that it is not yet a complete standard; however, by the end of 2002 or early 2003 it should be one. With iSCSI, TCP/IP performance overheads are high; the solution is TCP/IP Offload Engines (TOE). Security is another issue; SCSI was designed for dedicated storage. With iSCSI you have data flowing over the WAN, so security is now a requirement.”
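The overhead point is easy to quantify: every SCSI data burst carried over iSCSI is wrapped in iSCSI, TCP, IP and Ethernet headers, all of which the host CPU must segment and checksum unless a TOE card takes over. The accounting below uses standard header sizes (the 48-byte iSCSI Basic Header Segment, 20-byte TCP and IP headers, an 18-byte Ethernet frame header with FCS) and a conventional 1500-byte MTU; jumbo frames would change the numbers.

```python
# Rough framing overhead for a 64 KB iSCSI data burst over standard Ethernet.

MTU = 1500
ETH, IP, TCP, ISCSI_BHS = 18, 20, 20, 48   # header sizes in bytes

payload_per_frame = MTU - IP - TCP                      # TCP payload per frame
block = 64 * 1024                                       # 64 KB of SCSI data
frames = -(-(block + ISCSI_BHS) // payload_per_frame)   # ceiling division
header_bytes = frames * (ETH + IP + TCP) + ISCSI_BHS
print(f"{frames} frames, {header_bytes} header bytes "
      f"({header_bytes / block:.1%} on top of the data)")
```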
       
TOE hasn’t really caught on in the market. M S Sidhu, managing director of Apara, believes that iSCSI will only catch on along with 10 Gigabit Ethernet networks. That said, iSCSI has fundamental advantages: messaging (file) and storage (block) data can be sent over the same wire, which simplifies SAN configurations. The fact that iSCSI builds upon SCSI, TCP/IP and Ethernet is expected to ease adoption.
Unified storage
“Unified storage is the ultimate goal,” says Banda Prasad, practice head for storage services at Wipro Infotech. Unified storage is defined as storage that supports SAN and NAS as well as emerging protocols such as iSCSI, either natively or through gateways. This new technology is being hailed as a panacea for all the ills of today’s storage networks.
     
According to Gartner, on average an enterprise spends $3 managing storage for every $1 spent on storage hardware. Unified storage solutions eliminate such constraints by accepting both file and block requests simultaneously over the same wire.
DAS and NAS offer limited scalability. Once a DAS or NAS installation reaches its storage capacity limit, additional servers must be deployed with their own islands of storage. SANs can scale, but they are complex and costly. It is tough to add storage in any of these technologies without some degree of service disruption.
Unified storage simplifies scalability without disruptions because it uses a common management layer that automatically reconfigures the underlying storage subsystem without administrative intervention. Capacity is simply added to the pool of available storage resources. With unified storage, a single resource manages everything and automatically reconfigures new storage or hardware elements. The storage pool, accessed by multiple topologies, can be scaled with significantly less management overhead than with traditional technologies.
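The idea of a common management layer is easier to picture with a toy model. The sketch below is purely illustrative and not modelled on any vendor’s actual API: consumers ask a pool for volumes, and adding a shelf of disks simply enlarges the pool, with no consumer-side reconfiguration.

```python
# Toy model of a unified storage pool: capacity is added to the pool,
# not to individual servers, and volumes are carved out on demand.

class StoragePool:
    def __init__(self) -> None:
        self.capacity_gb = 0
        self.allocated_gb = 0

    def add_shelf(self, size_gb: int) -> None:
        """New hardware simply grows the pool; consumers see nothing change."""
        self.capacity_gb += size_gb

    def provision_volume(self, size_gb: int) -> str:
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted; add another shelf")
        self.allocated_gb += size_gb
        return f"vol-{self.allocated_gb}"     # opaque handle for the consumer

pool = StoragePool()
pool.add_shelf(1000)                  # initial shelf
db_vol = pool.provision_volume(400)   # block volume for a database
pool.add_shelf(1000)                  # scaling up is just another shelf
nas_vol = pool.provision_volume(800)  # file share carved from the same pool
print(db_vol, nas_vol, pool.capacity_gb - pool.allocated_gb, "GB free")
```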
Researchers estimate that 80 percent of today’s installed storage infrastructure is DAS. Customers with heavy DAS infrastructure face a number of challenges: the nature of storage and access is changing as digital images and video co-exist with database applications, and disk storage associated with a given DAS server is accessible only through that specific server, thus creating islands of data.
SANs have a high entry price, require special equipment and management, handle only block data, and are cumbersome to integrate with existing infrastructures. NAS solutions are suited for file-only applications, have limited scalability, and are difficult to manage and cluster.
Unified storage offers a fresh approach to data storage. It lets companies store file or raw block data directly on existing IP networks. The raw block transfer capability of unified storage solutions enables distributed storage volumes to be seen as directly attached disks on the network. These solutions are protocol-agnostic, which enables them to integrate easily into existing heterogeneous storage environments. In addition, they can be purchased incrementally, allowing customers to grow storage in a modular and scalable manner. A virtualised cluster of unified storage devices can be partitioned and allocated as a raw block device running in conjunction with an Oracle application, or as a simple network-attached file server.
Virtualisation over a SAN is another way of fooling 30-40 different applications into thinking that a storage box is dedicated to them. This is done by virtualisation of the switch.
One expected development in enterprise storage is a self-bootable device: essentially storage that starts itself up.
Experts believe that it will take 3-4 years before fibre to the desktop catches on. Cat6, which supports Gigabit Ethernet, will become the preferred means of cabling using copper. The migration from existing Cat5 to Cat6 will be a phased one. Only those users who need greater bandwidth, perhaps those using CAD or multimedia authoring tools, will benefit significantly from this technology. Changing to a Gigabit backbone doesn’t come cheap; the cost works out to around Rs 1 crore for changing switches and rewiring.
Network management software
Experts feel that, despite the factors driving adoption, it will be some time before the network management software (NMS) market in India matures to the level of commanding significant volumes. The cost of the software is currently on the higher side, restricting the market to larger enterprises; it will take another two years for NMS to make deeper inroads. Meanwhile, there is a thrust from the big three server vendors—IBM, HP and Sun—towards improving resource utilisation in data centres and server farms.
N1
Sun’s N1 is an effort at letting customers assign computing tasks to pools of servers, storage systems and network equipment without needing to allocate jobs to individual pieces of hardware. At the same time, N1 has the goal of improving resource utilisation in data centres. N1 competes with HP’s Utility Data Centre and IBM’s autonomic computing projects. “N1 builds the computer out of the network,” says Anil Valluri, director of systems engineering at Sun Microsystems India.
N1 will take advantage of virtualisation to let many computing jobs be handled by a pool of computing equipment rather than by a particular server. For this, Sun has acquired two start-ups, Terraspring and Pirus Networks. Terraspring’s software keeps track of computing equipment in a data centre and lets administrators provision systems for new jobs. Pirus sells hardware and software that virtualises storage systems from various manufacturers.
N1 will debut in Sun’s blade servers and Sun’s services that help customers prepare for the technology.
IBM’s autonomic computing
IBM’s initiative is a response to the problem of dealing with the consequences of the decades-long reign of Moore’s Law. Demand for skilled IT workers is expected to increase by over 100 percent in the next six years; to help cope with that, the idea is to relieve humans of the burden of coping with computing complexity and pass it back to the computers. Big Blue’s researchers say, “We need to develop autonomic computer systems that self-regulate, self-repair and respond to changing conditions without conscious effort on our part.” IBM is investing $2 billion in developing autonomic systems.
  
Autonomic computing aims to introduce self-configuring, self-healing, self-optimising and self-protecting capabilities in the whole range of computing systems, from desktop computers to mainframes to software. Its early efforts are focused on storage and software (Tivoli, DB2 and WebSphere). Autonomic computing capabilities are being built into IBM’s Shark product line through software that lets users configure and manage data across large server farms. IBM calls Tivoli Risk Manager 4.1 the first “autonomic security management software” capable of automatically monitoring a network’s health, protecting it from attacks, and healing it.
Risk Manager’s ‘self-healing’ features include the ability to integrate with software distribution tools, including Tivoli Configuration Manager and similar third-party products. This feature lets Risk Manager push out security patches and software updates to devices under its management. WebSphere 5.0 server contains a number of self-optimising and self-healing features. DB2 8 has a feature called ‘Health Center’ that automatically updates a database administrator on system performance, offers advice about database problems or applications, and sends alerts when a fix has been generated.
HP Utility Data Centre
On average, data centres have utilisation rates of 35 percent. Businesses would love to push that number up to, say, 75 percent, and reduce data centre costs without jeopardising service levels. That’s where HP UDC (self-adapting, self-healing and policy-driven) comes into the picture, allocating and managing computing power automatically. UDC supports multiple hardware vendors and operating systems, and lets users graphically assign servers to jobs as demand arises by booting servers in the data centre from a shared disk network that includes multiple operating systems and applications. The target segment? Telecom, financial services, managed service providers and infrastructure application service providers. The core of UDC is HP’s utility controller software, which simplifies the design, allocation and billing of IT resources for applications and services.
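Those utilisation figures translate directly into server counts. The sketch below works through the arithmetic, assuming the workload is fixed and can be divided evenly across machines, which is an idealisation:

```python
# If 100 servers run at 35 percent utilisation, how many would the same
# workload need at 75 percent? Assumes the load divides evenly across machines.

import math

servers_now, util_now, util_target = 100, 0.35, 0.75
work = servers_now * util_now                    # total work in server-equivalents
servers_needed = math.ceil(work / util_target)
print(f"{servers_needed} servers at 75% do the work of {servers_now} at 35%")
# -> 47 servers at 75% do the work of 100 at 35%
```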
UDC allows data centre infrastructures to be wired once, then provisioned virtually and managed on the fly: deploying new applications and services, activating new customers, and offering usage-based billing.
Internet2
The Internet2 project kicked off in 1996. The US government, colleges, universities and companies such as IBM, Lucent, Cisco and Nortel are involved in this project; some 185 universities and research labs are working on Internet2. The goal behind Internet2 is to deploy advanced network applications and technologies, speeding up the creation of a faster Internet. Internet2 is not available to the public. It uses a high-bandwidth backbone but supports a much smaller number of users than the public Net: three million versus several hundred million. Internet2 is fast; the slowest connection on it is 155 Mbps. It is designed to minimise the number of hops between routers to further speed things up.
       
Internet2 has been used to test futuristic applications such as 3D Brain Mapping, Remote Medical Specialist Evaluation (real-time interaction among non-physician experts and supervising physicians sharing large image data sets), Digital Video (including interactive two-way digital video), Remote Instruction and Teaching, and Remote Telescope Manipulation, to name a few.
Internet2 has been compared to a time machine showing where the [public] Internet will be in three to five years.
The network uses two high-performance optical backbones: MCI Worldcom’s very high-speed Backbone Network Service (vBNS), and Abilene, a 10,000-mile backbone built specifically for Internet2. Methods are being developed to give some transmissions higher priority. By marking data as “urgent,” researchers can ensure that real-time data (say, a video stream of a surgery) will cross the network before less time-sensitive data (e-mail). Multicasting allows a single data stream (such as a live video broadcast) to travel across the Internet and then split off copies of itself to multiple destinations. On the public Internet, by contrast, the originating server must transmit a separate data stream to each user, greatly increasing congestion.
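The congestion argument is easy to quantify: with unicast, source-side bandwidth grows linearly with the audience, while multicast stays flat. The sketch below assumes a 1.5 Mbps video stream purely as an example figure.

```python
# Source-side bandwidth for one live stream delivered to N viewers.
# Unicast sends one copy per viewer; multicast sends one copy that the
# network replicates. The 1.5 Mbps stream rate is an assumed example.

STREAM_MBPS = 1.5

for viewers in (10, 100, 1000):
    unicast = STREAM_MBPS * viewers
    print(f"{viewers:5d} viewers: unicast {unicast:8.1f} Mbps, "
          f"multicast {STREAM_MBPS:.1f} Mbps")
```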
Once you have your CCNA certification, the obvious next step is making sure you are employed in one of the many CCNA jobs. New graduates with a CCNA certification sometimes find it hard to walk straight into CCNA jobs, but there are jobs out there. Having a good plan of attack is the best approach to dealing with this problem.
As with any job search, when looking for CCNA jobs, be clear on the jobs you are qualified to do and don’t aim for those completely out of your league. If you have just passed your CCNA and have limited prior experience, you are simply not going to get a network team leader role in a large organization. You need to be realistic.
Work on tailoring your resume/CV for any CCNA jobs that you apply for. If a company is looking for a particular skill, make sure this is highlighted or prominent in some way on your resume. There is no point in hiding your light under a bushel when it comes to getting a job.
It Is Important To Research CCNA Jobs
Being realistic means you need to research what CCNA jobs are out there and what indicative salaries are being paid for these roles. Finding out salary ranges is always the hardest part of any evaluation, but clues provided by online job search engines and industry publications will give you some indication. Then, concentrate on applying for the CCNA jobs that are within your reach.
CCNA Jobs – Prepare For Your Interview
Be well prepared for any job interview or any company interaction you may have. It is all well and good to have the CCNA qualification, but in the end it is likely to be your personality and how you present yourself at the interview that may swing the job decision. Your resume will have told the employer you are worth interviewing, along with everyone else that is interviewed. Your personal approach may be the difference between candidate A and B in the eyes of the employer. Listen to the questions that are asked, and answer what is asked – don’t just tell them what you want them to hear.
 