
Tuesday, February 4, 2014

B.Sc. IT BT9003 (Semester 5, Data Storage Management) Assignment

Fall 2013 Assignment
Bachelor of Science in Information Technology (BSc IT) – Semester 5
BT9003 – Data Storage Management – 4 Credits
(Book ID: B1190)
Assignment Set (60 Marks)

1.      Discuss DAS, NAS and SAN storage technologies.
Ans.-   DAS (Direct Attached Storage):- When Windows servers leave the factory, they can be configured with several storage options.  Most servers will contain 1 or more local disk drives which are installed internal to the server’s cabinet.  These drives are typically used to install the operating system and user applications.  If additional storage is needed for user files or databases, it may be necessary to configure Direct Attached Storage (DAS).
DAS is well suited for a small-to-medium sized business where sufficient amounts of storage can be configured at a low startup cost.  The DAS enclosure will be a separate adjacent cabinet that contains the additional disk drives.  An internal PCI-based RAID controller is typically configured in the server to connect to the storage.  The SAS (Serial Attached SCSI) technology is used to connect the disk arrays as illustrated in the following example.

As mentioned, one of the primary benefits of DAS storage is the lower startup cost to implement.  Managing the storage array is done individually as the storage is dedicated to a particular server.  On the downside, there is typically limited expansion capability with DAS, and limited cabling options (1 to 4 meter cables).  Finally, because the RAID controller is typically installed in the server, there is a potential single point of failure for the DAS solution.

SAN (Storage Area Networks):- With Storage Area Networks (SAN), we typically see this solution used with medium-to-large size businesses, primarily due to the larger initial investment.  SANs require an infrastructure consisting of SAN switches, disk controllers, HBAs (host bus adapters) and fibre cables.  SANs leverage external RAID controllers and disk enclosures to provide high-speed storage for numerous potential servers.
The main benefit to a SAN-based storage solution is the ability to share the storage arrays to multiple servers.  This allows you to configure the storage capacity as needed, usually by a dedicated SAN administrator.  Higher levels of performance throughput are typical in a SAN environment, and data is highly available through redundant disk controllers and drives.  The disadvantages include a much higher startup cost for SANs, and they are inherently much more complex to manage.  The following diagram illustrates a typical SAN environment.

NAS (Network Attached Storage):- A third type of storage solution exists that is a hybrid option called Network Attached Storage (NAS).  This solution uses a dedicated server or “appliance” to serve the storage array.  The storage can be commonly shared to multiple clients at the same time across the existing Ethernet network.  The main difference between NAS and DAS/SAN is that NAS servers use file-level transfers, while DAS and SAN solutions use block-level transfers, which are more efficient.
NAS storage typically has a lower startup cost because the existing network can be used.  This can be very attractive to small-to-medium size businesses.  Different protocols can be used for file sharing, such as NFS for UNIX clients and CIFS for Windows clients.  Most NAS models implement the storage arrays as iSCSI targets that can be shared across the networks.  Dedicated iSCSI networks can also be configured to maximize the network throughput.  The following diagram shows how a NAS configuration might look.


2.      Define Perimeter Defense and give examples of it.
Ans.-   Perimeter Defenses:- Perimeter defenses are used for security purposes to keep a zone secure. A secure zone is some combination of policies, procedures, technical tools, and techniques enabling a company to protect its information. Perimeter defenses provide a physical environment, with management's support, in which privileges for access to all electronic assets are clearly laid out and observed. Typical perimeter defense measures include installing a security device at the entrance to and exit from a secure zone, and installing an intrusion detection monitor outside the secure zone to monitor it. Other measures include ensuring that important servers within the zone have been hardened (meaning that special care has been taken to eliminate security holes and to shut down potentially vulnerable services) and that access into the secure zone is restricted to a set of configured IP addresses. Moreover, access to the security appliance needs to be logged, all changes to the security appliance need to be documented, and such changes must require the approval of the secure zone's owner. Finally, intrusion alerts detected in the zone must be immediately transmitted to the owner of the zone and to Information Security Services for rapid and effective resolution.
Following are some examples of perimeter defenses:
Firewall:- The primary method of protecting the corporate or home network from intruders is the firewall. Firewalls are designed to examine traffic as it comes in and deny entry to those who do not have access rights to the system. The most common functions of firewalls are proxy services, packet filtering, and network address translation (NAT).
Packet filtering admits or denies traffic attempting to access the network based on predefined rules. A common version of packet filtering is port blocking, in which all traffic to a particular TCP/IP port is blocked to all external connections. Host-based firewalls, common in home and small-business situations, use this method to protect individual desktop computers.
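The rule-based filtering described above can be sketched in Python. This is only an illustrative model, not a real firewall; the blocked port numbers and the internal address prefixes are assumptions chosen for the example:

```python
# Minimal sketch of rule-based packet filtering with port blocking.
# Ports that an administrator might block to all external connections:
BLOCKED_PORTS = {23, 135, 445}

def filter_packet(src_ip: str, dst_port: int,
                  internal_prefixes=("10.", "192.168.")) -> str:
    """Admit or deny a packet based on predefined rules."""
    # Host-based port blocking: deny external traffic to blocked ports.
    is_internal = src_ip.startswith(internal_prefixes)
    if not is_internal and dst_port in BLOCKED_PORTS:
        return "deny"
    return "allow"

print(filter_packet("203.0.113.7", 445))   # external source, blocked port
print(filter_packet("192.168.1.5", 445))   # internal source, admitted
```

A real host-based firewall evaluates many more rule attributes (protocol, direction, connection state), but the decision structure is the same: match the packet against predefined rules, then admit or deny.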
Network address translation services translate internal addresses into a range of external addresses. This allows the internal addressing scheme to be obscured to the outside world. It also makes it difficult for outside traffic to connect directly to an internal machine.
All firewalls provide a choke point through which an intruder must pass. Any or all traffic can then be examined, changed, or blocked depending on security policy.
Intrusion detection systems and intrusion response systems:- An Intrusion Detection System (IDS) is a device or software system that examines violations of security policy to determine whether an attack is in progress or has occurred. An IDS does not regulate access to the network; it detects and then reports on the alleged attack.
Intrusion Response Systems are devices or software that are capable of actively responding to a breach in security. They not only detect an intrusion but also act on it in a predetermined manner.

3.      Explain SCSI Logical Units and Asymmetrical communications in SCSI.
Ans.-   SCSI logical units: SCSI targets have logical units that provide the processing context for SCSI commands. Essentially, a logical unit is a virtual machine (or virtual controller) that handles SCSI communications on behalf of real or virtual storage devices in a target. Commands received by targets are directed to the appropriate logical unit by a task router in the target controller. The work of the logical unit is split between two different functions: the device server and the task manager. The device server executes commands received from initiators and is responsible for detecting and reporting errors that might occur. The task manager is the work scheduler for the logical unit, determining the order in which commands are processed in the queue and responding to requests from initiators about pending commands. The logical unit number (LUN) identifies a specific logical unit (think virtual controller) in a target. Although we tend to use the term LUN to refer to a real or virtual storage device, a LUN is an access point for exchanging commands and status information between initiators and targets. Metaphorically, a logical unit is a "black box" processor, and the LUN is simply a way to identify SCSI black boxes. Logical units are architecturally independent of target ports and can be accessed through any of the target's ports, via a LUN. A target must have at least one LUN, LUN 0, and might optionally support additional LUNs. For instance, a disk drive might use a single LUN, whereas a subsystem might allow hundreds of LUNs to be defined.
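The division of labor above (task router, task manager, device server) can be sketched as a simplified Python model. The class and method names are invented for illustration; a real SCSI target implements this in firmware:

```python
# Sketch: a SCSI target routing commands to logical units by LUN.

class LogicalUnit:
    """Combines the task manager (queues commands) and the
    device server (executes them and reports results)."""
    def __init__(self, lun):
        self.lun = lun
        self.queue = []                 # task manager: pending commands

    def submit(self, command):
        self.queue.append(command)

    def device_server(self):
        # Execute queued commands in the order chosen by the
        # task manager (simple FIFO here).
        return [f"LUN {self.lun}: executed {c}" for c in self.queue]

class Target:
    def __init__(self):
        # A target must have at least one LUN, LUN 0.
        self.logical_units = {0: LogicalUnit(0)}

    def add_lun(self, lun):
        self.logical_units[lun] = LogicalUnit(lun)

    def task_router(self, lun, command):
        # Direct each received command to the appropriate logical unit.
        if lun not in self.logical_units:
            raise ValueError(f"LUN {lun} not defined on this target")
        self.logical_units[lun].submit(command)

target = Target()
target.task_router(0, "READ(10)")
print(target.logical_units[0].device_server())
```

Note how the logical unit, not the port, owns the command queue: this mirrors the point that logical units are architecturally independent of target ports.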

Asymmetrical communications in SCSI: Unlike most data networks, the communications model for SCSI is not symmetrical. Both sides perform different functions and interact with distinctly different users/applications. Initiators work on behalf of applications, issuing commands and then waiting for targets to respond. Targets do their work on behalf of storage media, waiting for commands to arrive from initiators and then reading and writing data to media.

4.      Explain techniques for switch based virtualization with necessary diagram.
Ans.-   As in array-based storage virtualization, fabric-based virtualization requires additional processing power and memory on top of a hardware architecture that is concurrently providing processing power for fabric services, switching and other tasks. Because large fabric switches (directors) are typically built on a chassis and option blade or line card scheme, virtualization capability is being introduced as yet another blade that slots into the director chassis, as shown in the figure below. This provides the advantage of tighter integration with the port cards that service storage and servers, but consumes expensive director real estate for a slot that could otherwise support additional end devices. If a virtualization blade is not properly engineered, it may degrade the overall availability specification of the director. A five-nines (99.999%) available director will inevitably lose some nines if a marginal option card is introduced.
Because software virtualization products have been around for some time, it is tempting to simply host one or another of those applications on a fabric switch. Typically, software virtualization runs on Windows or Linux, which in turn implies that a virtualization blade that hosts software will essentially be a PC on a card. This design has the advantage, for the vendor at least, of time to market, but as with host or appliance virtualization products in general, it may pose potential performance issues if the PC logic cannot cope with high traffic volumes. Consequently, some vendors are pursuing hardware-assisted virtualization on fabric switches by creating ASICs (application specific integrated circuits) that are optimized for high- performance frame decoding and block address mapping. These ASICs may be implemented on director blades or on auxiliary modules mounted in the director enclosure.

A storage virtualization engine as an option card within a director should enable virtualization of any storage asset on any director port.
Whether the fabric-based virtualization engine is hosted on a PC blade, an optimized ASIC blade or auxiliary module, it should have the flexibility to provide virtualization services to any port on the director. In a standard fabric architecture, frames are simply switched from one port to another based on destination Fibre Channel address. Depending on the virtualization method used, the fabric virtualization engine may intervene in this process by redirecting frames from various ports according to the requirements of the virtual logical address mapping of a virtualized LUN. In addition, if a storage asset is moved from one physical port to another, the virtualization engine must monitor the change in network address to preserve consistent device mapping. This adds considerable complexity to internal fabric management to accommodate the adds, moves and changes that are inevitable in storage networking.
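The frame redirection and device-move handling described above can be sketched as a mapping table that the virtualization engine consults. The structure and names here are illustrative assumptions; real engines map at block granularity and in hardware:

```python
# Sketch: a fabric virtualization engine mapping virtual LUNs to
# (physical port, physical LUN) destinations, with support for moves.

class FabricVirtualizer:
    def __init__(self):
        self.lun_map = {}               # virtual LUN -> (port, physical LUN)

    def map_lun(self, virtual_lun, port, physical_lun):
        self.lun_map[virtual_lun] = (port, physical_lun)

    def redirect(self, virtual_lun):
        """Return the physical destination for a frame addressed
        to a virtualized LUN."""
        return self.lun_map[virtual_lun]

    def move_asset(self, virtual_lun, new_port):
        # When a storage asset moves to another physical port, only the
        # map changes; the server-visible virtual LUN stays consistent.
        _, physical_lun = self.lun_map[virtual_lun]
        self.lun_map[virtual_lun] = (new_port, physical_lun)

engine = FabricVirtualizer()
engine.map_lun(5, port=2, physical_lun=0)
print(engine.redirect(5))
engine.move_asset(5, new_port=7)
print(engine.redirect(5))
```

The point of the sketch is the last step: the server keeps addressing virtual LUN 5, while the engine absorbs the add/move/change, which is exactly the management complexity the text describes.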

5.      Explain in brief heterogeneous mirroring with necessary diagram.
Ans.-   By abstracting physical storage, storage virtualization enables mirroring, or synchronized local data copying, between dissimilar storage systems. Because the virtualization engine processes the SCSI I/O to physical storage and is represented as a single storage target to the server, virtualized mirroring can offer more flexible options than conventional disk-to-disk techniques.
In traditional single-vendor environments, mirroring is typically performed within a single array (one set of disk banks to another) or between adjacent arrays. Disk mirroring may be active/passive, in that the secondary mirror is only brought into service if the primary array fails, or active/active, in which case the secondary mirror can be accessed for read operations if the primary is busy. This not only increases performance but also enhances the value of the secondary mirror. In addition, some vendors provide mutual mirroring between disk arrays so that each array acts as a secondary mirror to its partner.
Heterogeneous mirroring under virtualization control allows mirroring operations to be configured from any physical storage assets and for any level of redundancy. As shown in the figure below, a server may perform traditional read and write operations to a virtualized primary volume. The target entity within the virtualization engine processes each write operation and acts as an initiator to copy it to two separate mirrors. The virtual mirrors, as well as the virtualized primary volume, may be composed of storage blocks from any combination of back-end physical storage arrays. In this example, the secondary mirror could be used to support non-disruptive storage processes such as archiving disk data to tape or migrating data from one class of storage to another.
Like traditional disk-based mirroring, this virtualized solution may be transparent to the host system, provided there is no significant performance impact in executing copies to heterogeneous storage. Transparency assumes, though, that the virtualizing is conducted by the fabric or an appliance attached to the fabric. Host-based virtualization would consume CPU cycles to perform multiple mirroring, and array-based virtualization typically cannot cross vendor lines. Because mirroring requires the completion of writes on the secondary mirrors before the next I/O is accepted, performance is largely dependent on the aggregate capabilities of the physical storage systems and the processing power of the virtualization engine itself.

Heterogeneous mirroring offers more flexible options than conventional mirroring, including three-way mirroring within storage capacity carved from different storage systems.
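The write path described above can be sketched as a fan-out: the engine accepts a write as a target, acts as an initiator toward each mirror, and completes the I/O only when every mirror acknowledges. This is a simplified synchronous model with invented names:

```python
# Sketch: synchronous fan-out of writes to heterogeneous mirrors.

class Mirror:
    """Stands in for a back-end array from any vendor."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block, data):
        self.blocks[block] = data
        return True                      # acknowledgement

class VirtualizedVolume:
    """Presented to the server as a single storage target."""
    def __init__(self, mirrors):
        self.mirrors = mirrors

    def write(self, block, data):
        # The next I/O is accepted only after every mirror completes,
        # so throughput is bounded by the slowest physical system.
        acks = [m.write(block, data) for m in self.mirrors]
        return all(acks)

volume = VirtualizedVolume(
    [Mirror("vendor-A"), Mirror("vendor-B"), Mirror("vendor-C")])
print(volume.write(42, b"payload"))
```

With three mirrors this is the three-way mirroring mentioned in the caption, carved from three different (here simulated) storage systems.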

6.      Discuss Disk-to-disk-to-tape (D2D2T) technology in brief.
Ans.-   Disk-to-disk-to-tape (D2D2T) is an approach to computer storage backup and archiving in which data is initially copied to backup storage on a disk storage system and then periodically copied again to a tape storage system.
Disk-based backup systems and tape-based systems both have advantages and drawbacks. For many computer applications, it's important to have backup data immediately available when the primary disk becomes inaccessible. In this scenario, the time to restore data from tape would be considered unacceptable. Disk backup is a better solution because data transfer can be four-to-five times faster than is possible with tape. However, tape is a more economical way to archive data that needs to be kept for a long time. Tape is also portable, making it a good choice for off-site storage.
A D2D2T scheme provides the best of both worlds. It allows the administrator to automate daily backups on disk, enabling fast restores, and then move data to tape as time permits. The use of tape also makes it possible to move older data off-site for disaster recovery protection and to comply with regulatory policies for long-term data retention, at relatively low cost.
Disk-to-disk-to-tape is often used as part of a storage virtualization system where the storage administrator can express a company's needs in terms of storage policies rather than in terms of the physical devices to be used.
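The policy-driven tiering mentioned above can be sketched as a simple age-based migration rule: recent backups stay on disk for fast restore, older ones move to tape. The retention threshold and names are assumptions for illustration:

```python
# Sketch: disk-to-disk-to-tape migration driven by a retention policy.

DISK_RETENTION_DAYS = 14        # assumed policy: two weeks on disk

def tier_for_backup(age_days: int) -> str:
    """Decide where a backup copy should live under a D2D2T policy."""
    if age_days <= DISK_RETENTION_DAYS:
        return "disk"           # fast restores from the disk tier
    return "tape"               # cheap long-term retention, off-site capable

def migrate(backups):
    """backups: list of (name, age_days) -> mapping of name to tier."""
    return {name: tier_for_backup(age) for name, age in backups}

print(migrate([("daily-01", 1), ("monthly-06", 180)]))
```

This matches the idea of expressing needs as storage policies: the administrator states a retention rule, and the system decides which physical device holds each copy.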



B.Sc. IT BT8901 (Semester 5, Object Oriented Systems) Assignment

Fall 2013 Assignment
Bachelor of Science in Information Technology (BSc IT) – Semester 5
BT8901 – Object Oriented Systems – 4 Credits
(Book ID: B1185)
Assignment Set (60 Marks)

1.      Write a note on Principles of Object Oriented Systems.
Ans.-   The object model comes with a lot of terminology. A Smalltalk programmer uses methods, a C++ programmer uses virtual member functions, and a CLOS programmer uses generic functions. An Object Pascal programmer talks of a coercion; an Ada programmer calls the same thing a type conversion. To minimize the confusion, let's see what object orientation is.
Bhaskar has observed that the phrase object-oriented "has been bandied about with carefree abandon with much the same reverence accorded 'motherhood,' 'apple pie,' and 'structured programming'". We can agree that the concept of an object is central to anything object-oriented. Stefik and Bobrow define objects as "entities that combine the properties of procedures and data since they perform computations and save local state". Defining objects as entities begs the question somewhat, but the basic concept here is that objects serve to unify the ideas of algorithmic and data abstraction. Jones further clarifies this term by noting that "in the object model, emphasis is placed on crisply characterizing the components of the physical or abstract system to be modeled by a programmed system…. Objects have a certain 'integrity' which should not – in fact, cannot – be violated. An object can only change state, behave, be manipulated, or stand in relation to other objects in ways appropriate to that object. An object is characterized by its properties and behavior."
Object-Oriented Programming:- Object-oriented programming is a method of implementation in which programs are organized as cooperative collections of objects, each of which represents an instance of some class, and whose classes are all members of a hierarchy of classes united via inheritance relationships.
There are three important parts to this definition: object-oriented programming (1) uses objects, not algorithms, as its fundamental logical building blocks; (2) treats each object as an instance of some class; and (3) relates classes to one another via inheritance relationships.
Object-Oriented Design:- Generally, design methods emphasize the proper and effective structuring of a complex system. Let's see the explanation for object-oriented design.
Object-oriented design is a method of design encompassing the process of object-oriented decomposition and a notation for depicting both logical and physical as well as static and dynamic models of the system under design.
There are two important parts to this definition: object-oriented design (1) leads to an object-oriented decomposition and (2) uses different notations to express different models of the logical (class and object structure) and physical (module and process architecture) design of a system, in addition to the static and dynamic aspects of the system.
Object-Oriented Analysis:- Object-oriented analysis (or OOA, as it is sometimes called) emphasizes the building of real-world models, using an object-oriented view of the world. Object-oriented analysis is a method of analysis that examines requirements from the perspective of the classes and objects found in the vocabulary of the problem domain.

2.      What are objects? Explain characteristics of objects.
Ans.-   The term object was first formally utilized in the Simula language. The term object means a combination of data and logic that represents some real world entity.
When developing an object-oriented application, two basic questions always arise:
What objects does the application need?
What functionality should those objects have?

Programming in an object-oriented system consists of adding new kinds of objects to the system and defining how they behave.
The different characteristics of the objects are:
i) Objects are grouped in classes:- A class is a set of objects that share a common structure and a common behavior, a single object is simply an instance of a class. A class is a specification of structure (instance variables), behavior (methods), and inheritance for objects.

Anbu, Bala, Chandru, Deva, and Elango are instances or objects of the class Employee
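The class/instance relationship in the caption can be shown directly in Python. This is a minimal sketch; the name attribute and the introduce method are illustrative assumptions:

```python
# Sketch: a class specifies structure (instance variables) and
# behavior (methods); each named employee is simply an instance.

class Employee:
    def __init__(self, name):
        self.name = name                 # structure: an instance variable

    def introduce(self):                 # behavior: a method
        return f"I am {self.name}"

employees = [Employee(n)
             for n in ("Anbu", "Bala", "Chandru", "Deva", "Elango")]
print(all(isinstance(e, Employee) for e in employees))   # all are instances
print(employees[0].introduce())
```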

Attributes: Object state and properties
Properties represent the state of an object. For example, in a car object, the manufacturer could be denoted by a name, a reference to a manufacturer object, or a corporate tax identification number. In general, an object's abstract state can be independent of its physical representation.

The attributes of a car object

ii) Objects have attributes and methods:- A method is a function or procedure that is defined for a class and typically can access the internal state of an object of that class to perform some operation. Behavior denotes the collection of methods that abstractly describes what an object is capable of doing. Each procedure defines and describes a particular behavior of the object. The object, called the receiver, is that on which the method operates. Methods encapsulate the behavior of the object. They provide interfaces to the object, and hide any of the internal structures and states maintained by the object.
iii) Objects respond to messages:- Objects perform operations in response to messages. The message is the instruction and the method is the implementation. An object, or an instance of a class, understands messages. A message has a name, just like a method, such as cost, set cost, or cooking time. An object understands a message when it can match the message to a method that has the same name as the message. To match a message, an object first searches the methods defined by its class. If the method is found, it is called. If not, the object searches the superclass of its class. If the method is found in a superclass, then it is called. Otherwise, the search continues upward. An error occurs only if none of the superclasses contains the method.
Different objects can respond to the same message in different ways. In this way a message is different from a subroutine call. This is known as polymorphism, and this gives a great deal of flexibility. A message differs from a function in that a function says how to do something and a message says what to do. Example: draw is a message given to different objects.

Objects respond to messages according to methods defined in its class.
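Both ideas above, the draw example and the upward search through superclasses, can be sketched in Python, whose method lookup works exactly as described. The shape classes are illustrative assumptions:

```python
# Sketch: different objects respond to the same message ("draw") in
# different ways (polymorphism), and a message not found in the class
# is searched for in the superclass.

class Shape:
    def describe(self):          # found via superclass lookup
        return "a shape"

class Circle(Shape):
    def draw(self):
        return "drawing a circle"

class Square(Shape):
    def draw(self):
        return "drawing a square"

for shape in (Circle(), Square()):
    print(shape.draw())          # same message, different responses

print(Circle().describe())       # resolved in the superclass Shape
```

Sending `describe` to a `Circle` fails to match in `Circle` itself, so the search moves up to `Shape`, where the method is found, mirroring the lookup rule in the text.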

3.      What are behavioral things in UML mode? Explain two kinds of behavioral things.
Ans.-   Behavioral things are the dynamic parts of UML models. These are the verbs of a model, representing behavior over time and space. In all, there are two primary kinds of behavioral things.
1. Interaction
2. State Machine

Interaction: An interaction is a behavior that comprises a set of messages exchanged among a set of objects within a particular context to accomplish a specific purpose. The behavior of a society of objects or of an individual operation may be specified with an interaction. An interaction involves a number of other elements, including messages, action sequences (the behavior invoked by a message), and links (the connection between objects). Graphically, an interaction (message) is rendered as a directed line, almost always including the name of its operation, as in the figure below.

Interaction (message)

State Machine: A state machine is a behavior that specifies the sequences of states an object or an interaction goes through during its lifetime in response to events, together with its responses to those events. The behavior of an individual class or a collaboration of classes may be specified with a state machine. A state machine involves a number of other elements, including states, transitions (the change from one state to another), events (things that trigger a transition), and activities (the response to a transition). Graphically, a state is rendered as a rounded rectangle, usually including its name and its substates, if any, as in the figure below.

State
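A state machine as described (states, events, transitions) can be sketched as a transition table. The states and events below are assumptions chosen for illustration:

```python
# Sketch: a state machine as a table of (state, event) -> next state.

TRANSITIONS = {
    ("Idle", "start"):    "Running",
    ("Running", "pause"): "Paused",
    ("Paused", "start"):  "Running",
    ("Running", "stop"):  "Idle",
}

class StateMachine:
    def __init__(self, state="Idle"):
        self.state = state

    def handle(self, event):
        # A transition fires only if the table defines one for the
        # current (state, event) pair; otherwise the state is unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

machine = StateMachine()
print(machine.handle("start"))   # Running
print(machine.handle("pause"))   # Paused
```

Activities (the response to a transition) could be added as callables stored alongside each next state in the table.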

4.      Write a short note on Class-Responsibility-Collaboration (CRC) Cards.
Ans.-   A Class Responsibility Collaborator (CRC) model (Beck & Cunningham 1989; Wilkinson 1995; Ambler 1995) is a collection of standard index cards that have been divided into three sections, as depicted in Figure 1. A class represents a collection of similar objects, a responsibility is something that a class knows or does, and a collaborator is another class that a class interacts with to fulfill its responsibilities.  Figure 2 presents an example of two hand-drawn CRC cards.

Figure 1. CRC Card Layout.


Figure 2. Hand-drawn CRC Cards.

Although CRC cards were originally introduced as a technique for teaching object-oriented concepts, they have also been successfully used as a full-fledged modeling technique. My experience is that CRC models are an incredibly effective tool for conceptual modeling as well as for detailed design.  CRC cards feature prominently in eXtreme Programming (XP) (Beck 2000) as a design technique.  My focus here is on applying CRC cards for conceptual modeling with your stakeholders.
A class represents a collection of similar objects. An object is a person, place, thing, event, or concept that is relevant to the system at hand. For example, in a university system, classes would represent students, tenured professors, and seminars. The name of the class appears across the top of a CRC card and is typically a singular noun or singular noun phrase, such as Student, Professor, and Seminar. You use singular names because each class represents a generalized version of a singular object. Although there may be the student John O’Brien, you would model the class Student. The information about a student describes a single person, not a group of people. Therefore, it makes sense to use the name Student and not Students. Class names should also be simple. For example, which name is better: Student or Person who takes seminars?
A responsibility is anything that a class knows or does. For example, students have names, addresses, and phone numbers. These are the things a student knows. Students also enroll in seminars, drop seminars, and request transcripts. These are the things a student does. The things a class knows and does constitute its responsibilities. Important: A class is able to change the values of the things it knows, but it is unable to change the values of what other classes know.
Sometimes a class has a responsibility to fulfill but does not have enough information to do it. For example, as you see in Figure 3, students enroll in seminars. To do this, a student needs to know whether a spot is available in the seminar and, if so, he then needs to be added to the seminar. However, students only have information about themselves (their names and so forth), not about seminars. What the student needs to do is collaborate/interact with the card labeled Seminar to sign up for a seminar. Therefore, Seminar is included in the list of collaborators of Student.

Figure 3. Student CRC card.

Collaboration takes one of two forms: a request for information or a request to do something. For example, the card Student requests an indication from the card Seminar whether a space is available (a request for information). Student then requests to be added to the Seminar (a request to do something). Another way to perform this logic, however, would have been to have Student simply request Seminar to enroll him into itself, and then have Seminar do the work of determining whether a seat is available and, if so, enrolling the student or, if not, informing the student that he was not enrolled.
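The Student/Seminar collaboration can be sketched directly in code, using the second style described above, where Student delegates the work to Seminar. The capacity value is an assumed detail:

```python
# Sketch: CRC collaboration - Student requests Seminar to enroll him,
# and Seminar, which knows whether a seat is available, does the work.

class Seminar:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity        # something the Seminar knows
        self.students = []

    def enroll(self, student):
        # Seminar decides: check for a seat, then add or refuse.
        if len(self.students) < self.capacity:
            self.students.append(student)
            return True
        return False

class Student:
    def __init__(self, name):
        self.name = name                # students know about themselves

    def enroll_in(self, seminar):
        # Collaboration: a request to do something, sent to Seminar.
        return seminar.enroll(self)

uml = Seminar("UML Modeling", capacity=1)
print(Student("John O'Brien").enroll_in(uml))   # enrolled
print(Student("Jane").enroll_in(uml))           # refused: seminar full
```

Note the responsibility boundary: Student never reads or changes what Seminar knows; it only sends a message, which is exactly the rule stated for CRC responsibilities.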

5.      Explain Modern Hierarchical Teams. Also draw its structure.
Ans.-   As just mentioned, the problem with traditional programmer teams is that it is all but impossible to find one individual who is both a highly skilled programmer and a successful manager. The solution is to use a matrix organizational structure and to replace the chief programmer with two individuals: a team leader, who is in charge of the technical aspects of the team's activities, and a team manager, who is responsible for all non-technical managerial decisions. The structure of the resulting team is shown in the figure below.


Figure:-The Structure of a Modern Hierarchical Programming Team

It is important to realize that this organizational structure does not violate the fundamental managerial principle that no employee should report to more than one manager. The areas of responsibility are clearly delineated. The team leader is responsible for only technical management. Thus, budgetary and legal issues are not handled by the team leader, nor are performance appraisals. On the other hand, the team leader has sole responsibility for technical issues. The team manager, therefore, has no right to promise, say, that the information system will be delivered within four weeks; promises of that sort have to be made by the team leader.
Before implementation begins, it is important to demarcate clearly those areas that appear to be the responsibility of both the team manager and the team leader. For example, consider the issue of annual leave. The situation can arise that the team manager approves a leave application because leave is a non-technical issue, only to find the application vetoed by the team leader because a deadline is approaching. The solution to this and related issues is for higher management to draw up a policy regarding those areas that both the team manager and the team leader consider to be their responsibility.

6.      Explain in brief the five levels of CMM.
Ans.-   A maturity level is a well-defined evolutionary plateau toward achieving a mature software process. Each maturity level provides a layer in the foundation for continuous process improvement.
In CMMI models with a staged representation, there are five maturity levels, designated by the numbers 1 through 5:
1.      Initial
2.      Managed
3.      Defined
4.      Quantitatively Managed
5.      Optimizing

CMMI Staged Representation - Maturity Levels

Maturity Level 1 - Initial
At maturity level 1, processes are usually ad hoc and chaotic. The organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes.
Maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects.
Maturity level 1 organizations are characterized by a tendency to overcommit, abandon processes in times of crisis, and fail to repeat their past successes.
Maturity Level 2 - Managed
At maturity level 2, an organization has achieved all the specific and generic goals of the maturity level 2 process areas. In other words, the projects of the organization have ensured that requirements are managed and that processes are planned, performed, measured, and controlled.
The process discipline reflected by maturity level 2 helps to ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.
At maturity level 2, requirements, processes, work products, and services are managed. The status of the work products and the delivery of services are visible to management at defined points.
Maturity Level 3 - Defined
At maturity level 3, an organization has achieved all the specific and generic goals of the process areas assigned to maturity levels 2 and 3.
At maturity level 3, processes are well characterized and understood, and are described in standards, procedures, tools, and methods.
A critical distinction between maturity level 2 and maturity level 3 is the scope of standards, process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At maturity level 3, the standards, process descriptions, and procedures for a project are tailored from the organization's set of standard processes to suit a particular project or organizational unit. The organization's set of standard processes includes the processes addressed at maturity level 2 and maturity level 3. As a result, the processes that are performed across the organization are consistent except for the differences allowed by the tailoring guidelines.
Maturity Level 4 - Quantitatively Managed
At maturity level 4, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.
At maturity level 4, subprocesses that contribute significantly to overall process performance are selected. These selected subprocesses are controlled using statistical and other quantitative techniques.
Quantitative objectives for quality and process performance are established and used as criteria in managing processes. Quantitative objectives are based on the needs of the customer, end users, organization, and process implementers. Quality and process performance are understood in statistical terms and are managed throughout the life of the processes.
For these processes, detailed measures of process performance are collected and statistically analyzed. Special causes of process variation are identified and, where appropriate, the sources of special causes are corrected to prevent future occurrences.
Maturity Level 5 - Optimizing
At maturity level 5, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and 3.
Processes are continually improved based on a quantitative understanding of the common causes of variation inherent in processes.
Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements.

Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement.


Tuesday, December 24, 2013

B.Sc. IT BT0088 (Semester 5, Cryptography and Network Security) Assignment

Fall 2013 Assignment
Bachelor of Science in Information Technology (BSc IT) – Semester 5
BT0088 – Cryptography and Network Security – 4 Credits
(Book ID: B1183)
Assignment Set (60 Marks)


1.      What is the need for security? Explain types of security attacks.
Ans.-    Computer security is required because organizations can be damaged by hostile software or intruders. The damage may take several forms, which are often interrelated. These include:
·         Damage or destruction of computer systems.
·         Damage or destruction of internal data.
·         Loss of sensitive information to hostile parties.
·         Use of sensitive information to steal items of monetary value.
·         Use of sensitive information against customers, which may result in legal action by customers against the organization and loss of customers.
·         Damage to the reputation of an organization.
·         Monetary damage due to loss of sensitive information, destruction of data, hostile use of sensitive data, or damage to the organization's reputation.

Types of Threats (Attacks)
The various types of threats that a computing environment may encounter are described below.

·         Interception:
This type of threat occurs when an unauthorized party (an outsider) gains access to an asset. The outside party can be a person, a program, or a computing system. Examples of this type of failure are illicit copying of program or data files, or wiretapping to obtain data in a network. Although a loss may be discovered fairly quickly, a silent interceptor may leave no traces by which the interception can be readily detected.

·         Interruption:
This occurs when an asset of the system becomes lost, unavailable, or unusable. An example is the malicious destruction of a hardware device, erasure of a program or data file, or malfunction of an operating system file manager so that it cannot find a particular disk file.

Passive attacks:- Passive attacks are in the nature of eavesdropping on, or monitoring of transmissions. The goal of the opponent is to obtain information that is being transmitted. Two types of passive attacks are release of message contents and traffic analysis.
The release of message content is easily understood. A telephone conversation, an electronic mail message, and a transferred file may contain sensitive or confidential information. We would like to prevent the opponent from learning the contents of these transmissions.
A second type of passive attack is traffic analysis. Even if a sender masks the content of messages using encryption (discussed later), an attacker may still be able to observe the pattern of those messages. The attacker (opponent) could determine the location and identity of the communicating hosts and could observe the frequency and length of the messages being exchanged. This information might be useful in guessing the nature of the communication that has taken place.
Passive attacks are very difficult to detect because they do not involve any alteration of the data.

Active Attacks:- Active attacks involve some modification of the data stream or the creation of a false stream and can be subdivided into four categories: masquerade, replay, modification of messages and denial of service.
A masquerade takes place when one entity pretends to be a different entity. A masquerade attack usually includes one of the other forms of active attack. Replay involves the passive capture of a data unit and its subsequent retransmission to produce an unauthorized effect.

2.      List substitution techniques. Explain Caesar's cipher.
Ans.-    Substitution is the simplest form of encryption, in which one letter is exchanged for another. A substitution is an acceptable way of encrypting text. There are four types of substitution techniques:

1.      The Caesar Cipher
2.      One-Time Pads
3.      The Vernam Cipher
4.      Book Cipher
The Caesar Cipher:- The Caesar cipher has an important place in history. Julius Caesar is said to have been the first to use this scheme, in which each letter is translated to the letter a fixed number of places after it in the alphabet. Caesar used a shift of 3, so that plaintext letter pi was enciphered as ciphertext letter ci by the rule ci = E(pi) = pi + 3 (mod 26).
A full translation chart of the Caesar cipher is shown here.



Using this encryption, the message
SIKKIM MANIPAL UNIVERSITY
would be encoded as
S I K K I M M A N I P A L U N I V E R S I T Y
v l n n l p p d q l s d o x q l y h u v l w b
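This translation can be sketched in Python (a minimal illustration, not part of the original assignment; the function name is mine):

```python
# Minimal sketch of the shift-3 Caesar cipher, following the convention
# above: uppercase plaintext in, lowercase ciphertext out, spaces kept.
def caesar_encrypt(plaintext, shift=3):
    result = []
    for ch in plaintext.upper():
        if ch.isalpha():
            # Shift within the 26-letter alphabet, wrapping around Z to A.
            result.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('a')))
        else:
            result.append(ch)  # spaces pass through unchanged
    return ''.join(result)

print(caesar_encrypt("SIKKIM MANIPAL UNIVERSITY"))
# -> vlnnlp pdqlsdo xqlyhuvlwb
```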

Cryptanalysis of the Caesar Cipher
Let us take a closer look at the result of applying Caesar's encryption technique to "SIKKIM MANIPAL UNIVERSITY". If we did not know the plaintext and were trying to guess it, we would have many clues from the ciphertext. For example, the breaks between words are preserved in the ciphertext, and double letters are preserved: the KK is translated to nn. We might also notice that when a letter is repeated, it maps again to the same ciphertext as it did previously. So the letter K always translates to n. These clues make this cipher easy to break.
Suppose you are given the following ciphertext message, and you want to try to determine the original plaintext.

wklv phvvdjh lv qrw wrr kdug wr euhdn

The message has actually been enciphered with a 27-symbol alphabet: A through Z plus the "blank" character or separator between words. As a start, assume that the coder was lazy and has allowed the blank to be translated to itself. If your assumption is true, it is an exceptional piece of information; knowing where the spaces are allows us to see which are the small words. English has relatively few small words, such as am, is, to, be, he, we, and, are, you, she, and so on. Therefore, one way to attack this problem and break the encryption is to substitute known short words at appropriate places in the ciphertext until you have something that seems to be meaningful. Once the small words fall into place, you can try substituting for matching characters at other places in the ciphertext.
Look again at the ciphertext you are decrypting. There is a strong clue in the repeated r of the word wrr. You might use this text to guess at three-letter words that you know. For instance, two very common three-letter words having the pattern xyy are see and too; other less common possibilities are add, odd, and off. (Of course, there are also obscure possibilities like woo or gee, but it makes more sense to try the common cases first.) Moreover, the combination wr appears in the ciphertext, too, so you can determine whether the first two letters of the three-letter word also form a two-letter word.
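Beyond guessing words, the cipher can be broken outright by exhaustion, since there are only 26 possible shifts. A minimal sketch (the function name is illustrative):

```python
# Brute-force sketch: with only 26 possible shifts, trying them all
# recovers the plaintext immediately.
def caesar_decrypt(ciphertext, shift):
    result = []
    for ch in ciphertext:
        if ch.isalpha():
            # Reverse the shift, wrapping around A back to Z.
            result.append(chr((ord(ch.lower()) - ord('a') - shift) % 26 + ord('a')))
        else:
            result.append(ch)
    return ''.join(result)

ciphertext = "wklv phvvdjh lv qrw wrr kdug wr euhdn"
for shift in range(26):
    print(shift, caesar_decrypt(ciphertext, shift))
# shift 3 yields: this message is not too hard to break
```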

3.      Explain in brief types of encryption systems.
Ans.-    Encryption systems can be classified in two basic ways: by key and by the unit of text processed. Key-based classification distinguishes single-key from multiple-key systems; unit-based classification distinguishes systems that work on a stream of characters from those that work on blocks of characters.

Based on Key:- Based on keys, there are two types of encryption: symmetric (also called "secret key") and asymmetric (also called "public key"). Symmetric algorithms use one key, which works for both encryption and decryption. Usually, the decryption algorithm is closely related to the encryption one.
In a symmetric system, both encryption and decryption are performed using the same key. Such systems provide a two-way channel to their users: A and B share a secret key, and they can both encrypt information to send to the other as well as decrypt information from the other. As long as the key remains secret, the system also provides authentication, proof that a message received was not fabricated by someone other than the declared sender. Authenticity is ensured because only the legitimate sender can produce a message that will decrypt properly with the shared key.
Public key systems, on the other hand, excel at key management. By the nature of the public key approach, you can send a public key in an e-mail message or post it in a public directory. Only the corresponding private key, which presumably is kept private, can decrypt what has been encrypted with the public key.
But for both kinds of encryption, a key must be kept well secured. Once the symmetric or private key is known by an outsider, all messages written previously or in the future can be decrypted (and hence read or modified) by the outsider. So, for all encryption algorithms, key management is a major issue. It involves storing, safeguarding, and activating keys.

Based on Block:- Based on the unit of text processed, encryption systems are classified as stream or block systems. A stream encryption algorithm converts one symbol of plaintext immediately into a symbol of ciphertext. The transformation depends only on the symbol, the key, and the control information of the encipherment algorithm. A model of stream enciphering is shown in the figure below.
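As an illustration of the idea (not a real cipher), a toy stream cipher can be sketched in Python: each plaintext byte is XORed with the corresponding byte of a keystream derived from the key, so one symbol of plaintext is converted immediately into one symbol of ciphertext. The keystream construction here is for demonstration only; real stream ciphers use carefully designed keystream generators.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: hash key||counter repeatedly (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def stream_xor(data: bytes, key: bytes) -> bytes:
    # Each symbol is transformed as soon as it arrives: ci = pi XOR ki.
    ks = keystream(key, len(data))
    return bytes(p ^ k for p, k in zip(data, ks))

ct = stream_xor(b"attack at dawn", b"shared-secret")
pt = stream_xor(ct, b"shared-secret")  # the same operation decrypts
assert pt == b"attack at dawn"
```

Note that because XOR is its own inverse, applying the same operation with the same key decrypts the message, which is why both parties must keep the key secret.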



Some kinds of errors, such as skipping a character in the key during encryption, affect the encryption of all future characters. However, such errors can sometimes be recognized during decryption because the plaintext will be properly recovered up to a point, and then all following characters will be wrong. If that is the case, the receiver may be able to recover from the error by dropping a character of the key on the receiving end. Once the receiver has successfully recalibrated the key with the ciphertext, there will be no further effects from this error.
To address this problem and make it harder for a cryptanalyst to break the code, we can use a block encryption algorithm. A block encryption encrypts a group of plaintext symbols as one block. The columnar transposition and other transpositions are examples of block ciphers. In the columnar transposition, the entire message is translated as one block. The block size need not have any particular relationship to the size of a character. Block ciphers work on blocks of plaintext and produce blocks of ciphertext, as shown in the figure below. In this figure, the central box represents an encryption machine: the previous plaintext pair has been converted to po, the current one being converted is IH, and the machine is soon to convert ES.
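The columnar transposition mentioned above can be sketched as follows (a simplified illustration; key-ordered columns and padding are omitted):

```python
# Simplified columnar transposition: write the message into rows of a
# fixed width, then read it out column by column. The whole message is
# treated as one block.
def columnar_encrypt(message: str, width: int) -> str:
    msg = message.replace(" ", "")
    rows = [msg[i:i + width] for i in range(0, len(msg), width)]
    # Read down each column in turn; a short final row is skipped
    # for columns it does not reach.
    return ''.join(row[c] for c in range(width) for row in rows if c < len(row))

print(columnar_encrypt("THIS IS A SECRET", 4))
# -> TIETHSCIARSSE
```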




4.      Explain authentication header with necessary diagrams.
Ans.-    Authentication Header (AH) is one of the two core security protocols in the IPSec protocol suite. AH provides data integrity, data-origin authentication, and protection against replay attacks. It does not provide confidentiality. This makes the AH header much simpler than ESP: it is merely a header, not a header plus a trailer. The figure below shows an AH-protected IP packet.


It provides authentication of either all or part of the contents of a datagram through the addition of a header that is calculated based on the values in the datagram. Which parts of the datagram are used for the calculation, and the placement of the header, depend on the mode (tunnel or transport) and the version of IP. The figure below shows the AH protocol structure.


The fields comprising the AH header are:
·         Next Header: The next header field identifies the protocol type of the next packet header after the AH packet header.
·         Payload Length: The length field states the length of the AH header information.
·         Reserved field: It is for future extensions of the AH protocol.
·         SPI field: identifies the security association (SA) to which the packet belongs.
·         Sequence Number: an incrementing value that protects against replay attacks.
·         Authentication Data: contains the information (the ICV) used to authenticate the packet.
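The field layout can be sketched with Python's struct module; the fixed part of the header is 12 bytes (field sizes per RFC 4302), and the field values below are purely illustrative:

```python
import struct

def build_ah_header(next_header: int, payload_len: int, spi: int,
                    seq: int, icv: bytes) -> bytes:
    """Pack an AH header: 12-byte fixed part, then the variable ICV.
    Field sizes follow RFC 4302; the Reserved field is sent as zero."""
    fixed = struct.pack("!BBHII", next_header, payload_len, 0, spi, seq)
    return fixed + icv

# Illustrative values: Next Header 6 (TCP) and a 12-byte truncated ICV.
hdr = build_ah_header(next_header=6, payload_len=4, spi=0x1234,
                      seq=1, icv=b"\x00" * 12)
print(len(hdr))  # 24 bytes: 12 fixed + 12 of authentication data
```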

The operation of the AH protocol is remarkably simple, especially for a network security protocol. It can be considered analogous to the algorithms used to calculate checksums or perform CRC checks for error detection. In those cases, a standard algorithm is used by the sender to compute a checksum or CRC code based on the contents of a message. This computed result is transmitted along with the original data to the destination, which repeats the calculation and discards the message if any discrepancy is found between its calculation and the one done by the source.
This is the same idea behind AH, except that instead of using a simple algorithm known to everyone, it uses a special hashing algorithm and a specific key known only to the source and the destination. SA between two devices is set up that specifies these particulars so that the source and destination know how to perform the computation, but nobody else can. On the source device, AH performs the computation and puts the result (called the Integrity Check Value or ICV) into a special header with other fields for transmission. The destination device does the same calculation using the key the two devices share, which enables it to see immediately if any of the fields in the original datagram were modified either due to error or malice.
It is important to point out here that, just as a checksum does not change the original data, neither does the ICV calculation change it. The presence of the AH header allows us to verify the integrity of the message but does not encrypt it. Thus, AH provides authentication but not privacy.
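The ICV computation and check described above can be sketched with an HMAC, one family of algorithms used with AH; the shared key and datagram contents below are illustrative:

```python
import hashlib
import hmac

shared_key = b"key-agreed-in-the-sa"   # illustrative key negotiated via the SA

def compute_icv(datagram: bytes) -> bytes:
    # AH implementations commonly truncate the MAC to 96 bits (12 bytes).
    return hmac.new(shared_key, datagram, hashlib.sha256).digest()[:12]

datagram = b"example ip datagram contents"
icv = compute_icv(datagram)

# Receiver: recompute with the shared key and compare in constant time.
assert hmac.compare_digest(icv, compute_icv(datagram))
# Any modification in transit changes the ICV, so tampering is detected.
assert not hmac.compare_digest(icv, compute_icv(datagram + b"x"))
```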

5.      Explain the processing of Encrypted E-Mail.
Ans.-    The sender chooses a (random) symmetric algorithm encryption key. Then, the sender encrypts a copy of the entire message to be transmitted, including FROM:, TO:, SUBJECT:, and DATE: headers. Next, the sender prepends plaintext headers. For key management, the sender encrypts the message key under the recipient's public key, and attaches that to the message as well. The process of creating an encrypted e-mail message is shown in Figure A.


Encryption can potentially yield any string as output. Many e-mail handlers expect that message traffic will not contain characters other than the normal printable characters, and network e-mail handlers use unprintable characters as control signals in the traffic stream. To avoid problems in transmission, encrypted e-mail converts the entire ciphertext message to printable characters. An example of an encrypted e-mail message is shown in Figure A above. Notice the three portions: an external (plaintext) header, a section by which the message encryption key can be transferred, and the encrypted message itself. (The encryption is shown with shading.)
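This conversion to printable characters is commonly done with base64 encoding, which can be sketched as follows (the ciphertext bytes here are arbitrary illustrative values):

```python
import base64

ciphertext = bytes(range(0, 256, 16))  # arbitrary bytes, many unprintable
printable = base64.b64encode(ciphertext).decode("ascii")
print(printable)  # only printable characters, safe to embed in a mail body

# The receiver reverses the encoding before decrypting.
restored = base64.b64decode(printable)
assert restored == ciphertext
```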


The encrypted e-mail standard works most easily as just described, using both symmetric and asymmetric encryption. The standard is also defined for symmetric encryption only: To use symmetric encryption, the sender and receiver must have previously established a shared secret encryption key. The processing type ("Proc-Type") field tells what privacy enhancement services have been applied. In the data exchange key field ("DEK-Info"), the kind of key exchange (symmetric or asymmetric) is shown. The key exchange ("Key-Info") field contains the message encryption key, encrypted under this shared encryption key. The field also identifies the originator (sender) so that the receiver can determine which shared symmetric key was used. If the key exchange technique were to use asymmetric encryption, the key exchange field would contain the message encryption key, encrypted under the recipient's public key. Also included could be the sender's certificate (used for determining authenticity and for generating replies).
To ensure the authenticity of the sender, the encrypted e-mail messages always carry a digital signature along with the message. The integrity is also assured because of a hash function (called a message integrity check, or MIC) in the digital signature. Optionally, encrypted e-mail messages can be encrypted for confidentiality.
Notice in Figure A above that the header inside the message (in the encrypted portion) differs from the one outside. A sender's identity or the actual subject of a message can be concealed within the encrypted portion.
The encrypted e-mail processing can integrate with ordinary e-mail packages, so a person can send both enhanced and nonenhanced messages, as shown in Figure B below. If the sender decides to add enhancements, an extra bit of encrypted e-mail processing is invoked on the sender's end; the receiver must also remove the enhancements. But without enhancements, messages flow through the mail handlers as usual.



6.      Explain characteristics of good security policy.
Ans.-    Characteristics of a good security policy
If a security policy is written poorly, it cannot guide the developers and users in providing appropriate security mechanisms to protect important assets. Certain characteristics make a security policy a good one.

1. Coverage: A security policy must be comprehensive: It must either apply to or explicitly exclude all possible situations. Furthermore, a security policy may not be updated as each new situation arises, so it must be general enough to apply naturally to new cases that occur as the system is used in unusual or unexpected ways.

2. Durability: A security policy must grow and adapt well. In large measure, it will survive the system's growth and expansion without change. If written in a flexible way, the existing policy will be applicable to new situations. However, there are times when the policy must change (such as when government regulations mandate new security constraints), so the policy must be changeable when it needs to be.
An important key to durability is keeping the policy free from ties to specific data or protection mechanisms that almost certainly will change. For example, an initial version of a security policy might require a ten-character password for anyone needing access to data on the Sun workstation in room 110. But when that workstation is replaced or moved, the policy's guidance becomes useless. It is preferable to describe assets needing protection in terms of their function and characteristics, rather than in terms of specific implementation. For example, the policy on Sun workstations could be re-worded to mandate strong authentication for access to sensitive student grades or customers' proprietary data. Better still, we can separate the elements of the policy, having one policy statement for student grades and another for customers' proprietary data. Similarly, we may want to define one policy that applies to preserving the confidentiality of relationships, and another protecting the use of system through strong authentication.

3. Realism: The policy must be realistic. That is, it must be possible to implement the stated security requirements with existing technology. Moreover, the implementation must be beneficial in terms of time, cost, and convenience; the policy should not recommend a control that works but prevents the system or its users from performing their activities and functions. It is important to make economically worthwhile investments in security, just as for any other careful business investment.


4. Usefulness: An obscure or incomplete security policy cannot be implemented properly, if at all. The policy must be written in a language that can be read, understood, and followed by anyone who must implement it or is affected by it. For this reason, the policy should be succinct, clear, and direct.
