Tuesday, February 18, 2014

Disable Internet Download Manager (IDM) automatic update


Internet Download Manager (IDM), as the name suggests, is Windows software that helps you download files, splitting them automatically to increase speeds by up to 5 times. It is easy to use, supports resuming and scheduling downloads, and integrates with various browsers for easy downloading.



Here is a trick to disable IDM's automatic update; it's a setting in the Windows Registry. If you are not familiar with editing the Registry, please do not try this, otherwise you may end up crashing your operating system.


[Screenshot: IDM registry fix]

Step 1: Open Registry Editor (press Windows Key+R, type regedit, and hit Enter).
Step 2: Navigate to HKEY_CURRENT_USER\Software\DownloadManager and locate the LastCheck value.
Step 3: Change the date to any date in the future, e.g. 14/04/25 (the format, dd/mm/yy or mm/dd/yy, follows your system locale).
Step 4: Close Registry Editor and restart IDM.
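If you prefer to script the change instead of editing by hand, here is a minimal Python sketch using the standard winreg module. It assumes LastCheck is stored as a REG_SZ date string under that key, which you should verify in Registry Editor first.

# Sketch: push IDM's LastCheck value into the future so the
# update check is skipped. Back up the registry before running.
import winreg

key = winreg.OpenKey(
    winreg.HKEY_CURRENT_USER,
    r"Software\DownloadManager",
    0,
    winreg.KEY_SET_VALUE,
)
# Assumption: the value is a plain date string in your locale format.
winreg.SetValueEx(key, "LastCheck", 0, winreg.REG_SZ, "14/04/25")
winreg.CloseKey(key)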

Watch this video to see how to disable IDM's automatic update.

Saturday, February 15, 2014

Outlook 2010: Recover Deleted Emails | Recover Hard(Permanent) Deleted Emails in Microsoft Outlook 2010



Even though Outlook 2010 does not support recovery of deleted emails directly, you can recover your emails even after a hard (permanent) delete without using any third-party components. Through integration of Outlook 2010 with Microsoft Exchange Server you can make it possible to recover deleted mail. This function is only available for Exchange-backed accounts; you can't recover mail this way from IMAP accounts such as Gmail, Yahoo, or AOL.
Launch Outlook 2010 and, from the left sidebar under Outlook Data File, click Deleted Items. Now navigate to the Folder tab. In the Clean Up group you will see that the Recover Deleted Items button is grayed out.
[Screenshot: Recover Deleted Items grayed out]
To enable this option you need a properly configured Exchange Server 2007/2010 account on your system. If the button is still grayed out after proper configuration, first close Outlook 2010 if it is running. Then click the Windows Start button, type regedit, and hit Enter.
Note: Enabling this feature requires modifying the registry. Before you start, make sure that you understand how to restore the registry, and back up the relevant keys to prevent any erratic behavior.
In the Registry Editor, navigate to:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Exchange\Client\Options
If you don't find the Options key, you can easily create it: right-click Client and, from the New menu, click Key, as shown in the screenshot below.
[Screenshot: creating the Options key]
Rename this key to Options, then right-click in the right-hand pane and, from the New menu, click DWORD (32-bit) Value.
[Screenshot: creating the DWORD (32-bit) Value]
Name the new DWORD DumpsterAlwaysOn, double-click it, and change its Value data to 1. Click OK to continue.
[Screenshot: setting the Value data]
Close the Registry Editor, launch Outlook 2010, and go to the Folder tab. You will notice that Recover Deleted Items is now enabled.
[Screenshot: Recover Deleted Items enabled]
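If you have to apply this edit on many machines, here is a minimal Python sketch of the same change using the standard winreg module. Run it from an elevated prompt, since it writes under HKEY_LOCAL_MACHINE; on 64-bit Windows with 32-bit Office, the key may live under the WOW6432Node view instead.

# Sketch: enable "Recover Deleted Items" on all folders by creating
# the DumpsterAlwaysOn DWORD. Requires administrator rights.
import winreg

# CreateKeyEx opens the key, creating Options if it does not exist.
key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Exchange\Client\Options",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "DumpsterAlwaysOn", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)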
If Exchange Server is properly configured, this should work for you. Check for any discrepancies in the Outlook 2010 integration, because if the account is not connected correctly the option won't behave the way it should. If it is still grayed out after complete verification, contact your system administrator.

Sunday, February 9, 2014

Difference Between Windows Server 2003 and Windows Server 2008


1. Windows Server 2008 is a combination of Windows Vista and Windows Server 2003 R2 (they share a common codebase).
2. RODC (Read-Only Domain Controller), a new type of domain controller, is introduced.
3. WDS (Windows Deployment Services) replaces RIS from the 2003 server.
4. Shadow copies are available for each and every folder.
5. The boot sequence is changed.
6. The installer is 32-bit, whereas the 2003 installer has 16-bit as well as 32-bit phases; that is why installation of 2008 is faster.
7. Services are known as roles, and installation is role-based.
8. The Group Policy editor is a separate option in the Active Directory tools.
9. The main differences between 2003 and 2008 are virtualization and management.
10. 2008 has more inbuilt components and updated third-party drivers. Microsoft introduces a major new feature with 2K8: Hyper-V (V for Virtualization), available only on 64-bit versions.
11. In Windows Server 2008, Microsoft introduces new features and technologies, some of which were not available in Windows Server 2003 with Service Pack 1 (SP1), that help to reduce the power consumption of server and client operating systems, minimize environmental byproducts, and increase server efficiency.
12. Windows Server 2008 has been designed with energy efficiency in mind, to give customers ready and convenient access to a number of new power-saving features. It includes updated support for Advanced Configuration and Power Interface (ACPI) processor power management (PPM) features, including support for processor performance states (P-states) and processor idle sleep states on multiprocessor systems. These features simplify power management in Windows Server 2008 (WS08) and can be managed easily across servers and clients using Group Policy.
13. Many features are updated, such as security, IIS, and RODC: the firewall now filters outbound as well as inbound traffic, IIS 7 is released, and read-only domain controllers are available.
14. Server Core provides the minimum installation required to carry out a specific server role, such as for a DHCP, DNS or print server.
15. Better security.
16. Enhanced Terminal Services.
17. Network Access Protection (NAP): Microsoft's system for ensuring that clients connecting to Server 2008 are patched, running a firewall and in compliance with corporate security policies.
18. PowerShell.
19. IIS 7.
20. BitLocker: system drive encryption can be a sensible security measure for servers located in remote branch offices.
21. Virtualization with Hyper-V: more and more companies are seeing virtualization as a way of reducing hardware costs by running several 'virtual' servers on one physical machine. If you like this technology, make sure that you buy an edition of Windows Server 2008 that includes Hyper-V, then launch Server Manager and add the role. Windows Server 2008, formerly codenamed Longhorn, is claimed to be no less than 45 times faster than its predecessor, Windows Server 2003, in terms of network transfer speeds. Whatever your perspective on Microsoft's last 32-bit server operating system, transfer speeds of up to 45 times are quite an evolution compared to Windows Server 2003.
22. Windows Aero.
23. Windows Server 2008 can be installed either as a Full installation (all services and applications) or as Server Core (only the minimal required services), whereas 2003 offers only the full OS installation.
24. Windows Server 2008 uses the Hyper-V application and the roles concept for better productivity; Server 2003 has no such features.
25. In Windows Server 2008, Active Directory has been renamed Active Directory Domain Services (AD DS). AD DS retains the tools, architectural design, and structure that were introduced in Windows 2000 Server and Windows Server 2003, with some added improvements.
26. 2003 was made to manage Windows XP networks.
27. 2008 is made to manage Windows Vista networks.
28. The Group Policy and Active Directory schemas have been altered to include Vista policies.

Tuesday, February 4, 2014

B.Sc. IT BT9003 (Semester 5, Data Storage Management) Assignment

Fall 2013 Assignment
Bachelor of Science in Information Technology (BSc IT) – Semester 5
BT9003 – Data Storage Management – 4 Credits
(Book ID: B1190)
Assignment Set (60 Marks)

1.      Discuss DAS, NAS and SAN storage technologies.
Ans.-   DAS (Direct Attached Storage):- When Windows servers leave the factory, they can be configured with several storage options.  Most servers will contain 1 or more local disk drives which are installed internal to the server’s cabinet.  These drives are typically used to install the operating system and user applications.  If additional storage is needed for user files or databases, it may be necessary to configure Direct Attached Storage (DAS).
DAS is well suited for a small-to-medium sized business where sufficient amounts of storage can be configured at a low startup cost.  The DAS enclosure will be a separate adjacent cabinet that contains the additional disk drives.  An internal PCI-based RAID controller is typically configured in the server to connect to the storage.  The SAS (Serial Attached SCSI) technology is used to connect the disk arrays as illustrated in the following example.

As mentioned, one of the primary benefits of DAS storage is the lower startup cost to implement.  Managing the storage array is done individually as the storage is dedicated to a particular server.  On the downside, there is typically limited expansion capability with DAS, and limited cabling options (1 to 4 meter cables).  Finally, because the RAID controller is typically installed in the server, there is a potential single point of failure for the DAS solution.

SAN (Storage Area Networks):- With Storage Area Networks (SAN), we typically see this solution used with medium-to-large size businesses, primarily due to the larger initial investment.  SANs require an infrastructure consisting of SAN switches, disk controllers, HBAs (host bus adapters) and fibre cables.  SANs leverage external RAID controllers and disk enclosures to provide high-speed storage for numerous potential servers.
The main benefit to a SAN-based storage solution is the ability to share the storage arrays to multiple servers.  This allows you to configure the storage capacity as needed, usually by a dedicated SAN administrator.  Higher levels of performance throughput are typical in a SAN environment, and data is highly available through redundant disk controllers and drives.  The disadvantages include a much higher startup cost for SANs, and they are inherently much more complex to manage.  The following diagram illustrates a typical SAN environment.

NAS (Network Attached Storage):- A third type of storage solution exists that is a hybrid option called Network Attached Storage (NAS).  This solution uses a dedicated server or “appliance” to serve the storage array.  The storage can be shared with multiple clients at the same time across the existing Ethernet network.  The main difference between NAS and DAS/SAN is that NAS servers use file-level transfers, while DAS and SAN solutions use block-level transfers, which are more efficient.
NAS storage typically has a lower startup cost because the existing network can be used.  This can be very attractive to small-to-medium size businesses.  Different protocols can be used for file sharing, such as NFS for UNIX clients and CIFS for Windows clients.  Most NAS models implement the storage arrays as iSCSI targets that can be shared across the networks.  Dedicated iSCSI networks can also be configured to maximize network throughput.  The following diagram shows how a NAS configuration might look.
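To make the file-level versus block-level distinction concrete, here is a small illustrative Python sketch. The UNC share and the raw device path are hypothetical examples, and the block-level read requires administrator rights on Windows.

# File-level access (NAS): the client names a file; the server owns
# the filesystem layout. The share path is a made-up example.
with open(r"\\nas-server\share\report.txt", "rb") as f:
    data = f.read()

# Block-level access (DAS/SAN): the client addresses raw blocks and
# the filesystem logic lives on the client side. Hypothetical device;
# needs administrator rights.
import os
fd = os.open(r"\\.\PhysicalDrive1", os.O_RDONLY | os.O_BINARY)
block = os.read(fd, 4096)   # read one 4 KiB block from the start
os.close(fd)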


2.      Define Perimeter Defense and give examples of it.
Ans.-   Perimeter Defenses:- Perimeter defenses are used to keep a secure zone secure. A secure zone is some combination of policies, procedures, technical tools, and techniques enabling a company to protect its information. Perimeter defenses provide a physical environment, with management's support, in which privileges for access to all electronic assets are clearly laid out and observed. Some perimeter defense measures include installing a security device at the entrance to and exit from a secure zone and installing an intrusion detection monitor outside the secure zone to monitor it. Other means of perimeter defense include ensuring that important servers within the zone have been hardened (meaning that special care has been taken to eliminate security holes and to shut down potentially vulnerable services) and that access into the secure zone is restricted to a set of configured IP addresses. Moreover, access to the security appliance needs to be logged, all changes to the security appliance need to be documented, and changes to the security appliance must require the approval of the secure zone's owner. Finally, intrusion alerts detected in the zone must be immediately transmitted to the owner of the zone and to Information Security Services for rapid and effective resolution.
The following are examples of perimeter defenses:
Firewall:- The primary method of protecting the corporate or home network from intruders is the firewall. Firewalls are designed to examine traffic as it comes in and deny entry to those who do not have access rights to the system. The most common functions of firewalls are proxy services, packet filtering, and network address translation (NAT).
Packet filtering admits or denies traffic attempting to access the network based on predefined rules. A common version of packet filtering is port blocking, in which all traffic to a particular TCP/IP port is blocked to all external connections. Host-based firewalls, common in home and small-business situations, use this method to protect individual desktop computers.
Network address translation services translate internal addresses into a range of external addresses. This allows the internal addressing scheme to be obscured to the outside world. It also makes it difficult for outside traffic to connect directly to an internal machine.
All firewalls provide a choke point through which an intruder must pass. Any or all traffic can then be examined, changed, or blocked depending on security policy.
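As a toy illustration of packet filtering, the sketch below admits or denies a packet based on predefined rules, including the port blocking described above. The rule set and field names are invented for the example, not any real firewall's API.

import ipaddress

BLOCKED_PORTS = {23, 135, 445}       # e.g. telnet, RPC, SMB
ALLOWED_SOURCES = ["10.0.0.0/8"]     # simplified internal range

def admit(src_ip: str, dst_port: int) -> bool:
    """Return True if the packet passes the filter rules."""
    if dst_port in BLOCKED_PORTS:    # port blocking
        return False
    src = ipaddress.ip_address(src_ip)
    return any(src in ipaddress.ip_network(net) for net in ALLOWED_SOURCES)

print(admit("10.1.2.3", 80))     # True: internal source, open port
print(admit("10.1.2.3", 445))    # False: SMB port is blocked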
Intrusion detection systems and intrusion response systems:- An Intrusion Detection System (IDS) is a device or software system that examines violations of security policy to determine whether an attack is in progress or has occurred. An IDS does not regulate access to the network; instead, it detects and reports on the alleged attack.
Intrusion Response Systems are devices or software that are capable of actively responding to a breach in security. They not only detect an intrusion but also act on it in a predetermined manner.

3.      Explain SCSI Logical Units and Asymmetrical communications in SCSI.
Ans.-   SCSI logical units: SCSI targets have logical units that provide the processing context for SCSI commands. Essentially, a logical unit is a virtual machine (or virtual controller) that handles SCSI communications on behalf of real or virtual storage devices in a target. Commands received by targets are directed to the appropriate logical unit by a task router in the target controller.
The work of the logical unit is split between two different functions: the device server and the task manager. The device server executes commands received from initiators and is responsible for detecting and reporting errors that might occur. The task manager is the work scheduler for the logical unit, determining the order in which commands are processed in the queue and responding to requests from initiators about pending commands.
The logical unit number (LUN) identifies a specific logical unit (think virtual controller) in a target. Although we tend to use the term LUN to refer to a real or virtual storage device, a LUN is really an access point for exchanging commands and status information between initiators and targets. Metaphorically, a logical unit is a "black box" processor, and the LUN is simply a way to identify SCSI black boxes. Logical units are architecturally independent of target ports and can be accessed through any of the target's ports via a LUN. A target must have at least one LUN, LUN 0, and might optionally support additional LUNs. For instance, a disk drive might use a single LUN, whereas a subsystem might allow hundreds of LUNs to be defined.
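The division of labor described above can be sketched as a toy Python model; the classes and names are purely illustrative, not any real SCSI stack.

from collections import deque

class LogicalUnit:
    """Toy logical unit: a task manager queue plus a device server."""
    def __init__(self, lun):
        self.lun = lun
        self.queue = deque()            # task manager's command queue

    def submit(self, command):
        self.queue.append(command)      # task manager schedules it

    def run(self):
        while self.queue:
            cmd = self.queue.popleft()  # device server executes in order
            print(f"LUN {self.lun}: executing {cmd}")

class Target:
    def __init__(self):
        self.luns = {0: LogicalUnit(0)}     # LUN 0 is mandatory

    def route(self, lun, command):          # the task router
        self.luns[lun].submit(command)

t = Target()
t.route(0, "READ(10)")
t.route(0, "WRITE(10)")
t.luns[0].run()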

Asymmetrical communications in SCSI: Unlike most data networks, the communications model for SCSI is not symmetrical. Both sides perform different functions and interact with distinctly different users/applications. Initiators work on behalf of applications, issuing commands and then waiting for targets to respond. Targets do their work on behalf of storage media, waiting for commands to arrive from initiators and then reading and writing data to media.

4.      Explain techniques for switch based virtualization with necessary diagram.
Ans.-   As in array-based storage virtualization, fabric-based virtualization requires additional processing power and memory on top of a hardware architecture that is concurrently providing processing power for fabric services, switching and other tasks. Because large fabric switches (directors) are typically built on a chassis with option blades or line cards, virtualization capability is being introduced as yet another blade that slots into the director chassis, as shown in the figure below. This provides the advantage of tighter integration with the port cards that service storage and servers, but it consumes expensive director real estate for a slot that could otherwise support additional end devices. If a virtualization blade is not properly engineered, it may degrade the overall availability specification of the director. A five-nines (99.999%) available director will inevitably lose some nines if a marginal option card is introduced.
Because software virtualization products have been around for some time, it is tempting to simply host one or another of those applications on a fabric switch. Typically, software virtualization runs on Windows or Linux, which in turn implies that a virtualization blade that hosts software will essentially be a PC on a card. This design has the advantage, for the vendor at least, of time to market, but as with host or appliance virtualization products in general, it may pose potential performance issues if the PC logic cannot cope with high traffic volumes. Consequently, some vendors are pursuing hardware-assisted virtualization on fabric switches by creating ASICs (application-specific integrated circuits) that are optimized for high-performance frame decoding and block address mapping. These ASICs may be implemented on director blades or on auxiliary modules mounted in the director enclosure.

A storage virtualization engine as an option card within a director should enable virtualization of any storage asset on any director port.
Whether the fabric-based virtualization engine is hosted on a PC blade, an optimized ASIC blade or auxiliary module, it should have the flexibility to provide virtualization services to any port on the director. In a standard fabric architecture, frames are simply switched from one port to another based on destination Fibre Channel address. Depending on the virtualization method used, the fabric virtualization engine may intervene in this process by redirecting frames from various ports according to the requirements of the virtual logical address mapping of a virtualized LUN. In addition, if a storage asset is moved from one physical port to another, the virtualization engine must monitor the change in network address to preserve consistent device mapping. This adds considerable complexity to internal fabric management to accommodate the adds, moves and changes that are inevitable in storage networking.

5.      Explain in brief heterogeneous mirroring with necessary diagram.
Ans.-   By abstracting physical storage, storage virtualization enables mirroring, or synchronized local data copying, between dissimilar storage systems. Because the virtualization engine processes the SCSI I/O to physical storage and is represented as a single storage target to the server, virtualized mirroring can offer more flexible options than conventional disk-to-disk techniques.
In traditional single-vendor environments, mirroring is typically performed within a single array (one set of disk banks to another) or between adjacent arrays. Disk mirroring may be active/passive, in that the secondary mirror is only brought into service if the primary array fails, or active/active, in which case the secondary mirror can be accessed for read operations if the primary is busy. This not only increases performance but also enhances the value of the secondary mirror. In addition, some vendors provide mutual mirroring between disk arrays so that each array acts as a secondary mirror to its partner.
Heterogeneous mirroring under virtualization control allows mirroring operations to be configured from any physical storage assets and for any level of redundancy. As shown in the figure below, a server may perform traditional read and write operations to a virtualized primary volume. The target entity within the virtualization engine processes each write operation and acts as an initiator to copy it to two separate mirrors. The virtual mirrors, as well as the virtualized primary volume, may be composed of storage blocks from any combination of back-end physical storage arrays. In this example, the secondary mirror could be used to facilitate non-disruptive storage processes such as archiving disk data to tape or migrating data from one class of storage to another.
Like traditional disk-based mirroring, this virtualized solution may be transparent to the host system, providing there is no significant performance impact in executing copies to heterogeneous storage. Transparency assumes, though, that the virtualizing is conducted by the fabric or an appliance attached to the fabric. Host-based virtualization would consume CPU cycles to perform multiple mirroring, and array-based virtualization typically cannot cross vendor lines. Because mirroring requires the completion of writes on the secondary mirrors before the next I/O is accepted, performance is largely dependent on the aggregate capabilities of the physical storage systems and the processing power of the virtualization engine itself.

Heterogeneous mirroring offers more flexible options than conventional mirroring, including three-way mirroring within storage capacity carved from different storage systems.
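As a rough Python sketch of the write path just described, where plain dictionaries stand in for heterogeneous physical arrays:

class VirtualVolume:
    """Toy virtualization engine: one target, fan-out to mirrors."""
    def __init__(self, primary, mirrors):
        self.primary = primary      # block -> data mapping
        self.mirrors = mirrors      # secondary mirrors (any backend)

    def write(self, block, data):
        self.primary[block] = data
        for m in self.mirrors:      # synchronous copy: the next I/O
            m[block] = data         # waits until all writes complete

    def read(self, block):
        return self.primary[block]  # reads served from the primary

vol = VirtualVolume({}, [{}, {}])   # three-way redundancy
vol.write(0, b"payload")
assert all(m[0] == b"payload" for m in vol.mirrors)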

6.      Discuss Disk-to-disk-to-tape (D2D2T) technology in brief.
Ans.-   Disk-to-disk-to-tape (D2D2T):- Disk-to-disk-to-tape (D2D2T) is an approach to computer storage backup and archiving in which data is initially copied to backup storage on a disk storage system and then periodically copied again to a tape storage system.
Disk-based backup systems and tape-based systems both have advantages and drawbacks. For many computer applications, it's important to have backup data immediately available when the primary disk becomes inaccessible. In this scenario, the time to restore data from tape would be considered unacceptable. Disk backup is a better solution because data transfer can be four-to-five times faster than is possible with tape. However, tape is a more economical way to archive data that needs to be kept for a long time. Tape is also portable, making it a good choice for off-site storage.
A D2D2T scheme provides the best of both worlds. It allows the administrator to automate daily backups on disk so that fast restores are possible, and then to move data to tape when time permits. The use of tape also makes it possible to move more mature data off-site for disaster recovery protection and to comply with regulatory policies for long-term data retention at a relatively low cost.
Disk-to-disk-to-tape is often used as part of a storage virtualization system where the storage administrator can express a company's needs in terms of storage policies rather than in terms of the physical devices to be used.
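A toy Python sketch of such a policy might look like this; the directory paths are hypothetical, and a real tape tier would be driven by backup software rather than a filesystem move.

import shutil, time
from pathlib import Path

DISK_TIER = Path("backup_disk")     # fast tier for quick restores
TAPE_TIER = Path("backup_tape")     # stand-in for the tape tier
RETENTION_DAYS = 14

def backup(source: Path):
    """Daily backup: copy straight to the disk tier."""
    DISK_TIER.mkdir(exist_ok=True)
    shutil.copy2(source, DISK_TIER / source.name)

def migrate_old_to_tape():
    """Move mature data from disk to the tape tier."""
    TAPE_TIER.mkdir(exist_ok=True)
    cutoff = time.time() - RETENTION_DAYS * 86400
    for f in DISK_TIER.iterdir():
        if f.stat().st_mtime < cutoff:
            shutil.move(str(f), TAPE_TIER / f.name)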


For More Assignments Click Here

B.Sc. IT BT8901 (Semester 5, Object Oriented Systems) Assignment

Fall 2013 Assignment
Bachelor of Science in Information Technology (BSc IT) – Semester 5
BT8901 – Object Oriented Systems – 4 Credits
(Book ID: B1185)
Assignment Set (60 Marks)

1.      Write a note on Principles of Object Oriented Systems.
Ans.-   The object model comes with a lot of terminology. A Smalltalk programmer uses methods, a C++ programmer uses virtual member functions, and a CLOS programmer uses generic functions. An Object Pascal programmer talks of a type coercion; an Ada programmer calls the same thing a type conversion. To minimize the confusion, let's see what object orientation is.
Bhaskar has observed that the phrase object-oriented "has been bandied about with carefree abandon with much the same reverence accorded 'motherhood,' 'apple pie,' and 'structured programming'". We can agree that the concept of an object is central to anything object-oriented. Stefik and Bobrow define objects as "entities that combine the properties of procedures and data since they perform computations and save local state". Defining objects as entities begs the question somewhat, but the basic concept here is that objects serve to unify the ideas of algorithmic and data abstraction. Jones further clarifies this term by noting that "in the object model, emphasis is placed on crisply characterizing the components of the physical or abstract system to be modeled by a programmed system…. Objects have a certain 'integrity' which should not – in fact, cannot – be violated. An object can only change state, behave, be manipulated, or stand in relation to other objects in ways appropriate to that object." An object is characterized by its properties and behavior.
Object-Oriented Programming:- Object-oriented programming is a method of implementation in which programs are organized as cooperative collections of objects, each of which represents an instance of some class, and whose classes are all members of a hierarchy of classes united via inheritance relationships.
There are three important parts to this definition: object-oriented programming (1) uses objects, not algorithms, as its fundamental logical building blocks; (2) treats each object as an instance of some class; and (3) relates classes to one another via inheritance relationships.
Object-Oriented Design:- Generally, design methods emphasize the proper and effective structuring of a complex system. Let's see the explanation of object-oriented design.
Object-oriented design is a method of design encompassing the process of object-oriented decomposition and a notation for depicting both logical and physical as well as static and dynamic models of the system under design.
There are two important parts to this definition: object-oriented design (1) leads to an object-oriented decomposition and (2) uses different notations to express different models of the logical (class and object structure) and physical (module and process architecture) design of a system, in addition to the static and dynamic aspects of the system.
Object-Oriented Analysis:- Object-oriented analysis (or OOA, as it is sometimes called) emphasizes the building of real-world models, using an object-oriented view of the world. Object-oriented analysis is a method of analysis that examines requirements from the perspective of the classes and objects found in the vocabulary of the problem domain.
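A minimal Python sketch of the three ingredients named in the definition of object-oriented programming above (objects as building blocks, each object an instance of a class, classes united by inheritance); the account example is invented:

class Account:                      # a class: structure plus behavior
    def __init__(self, balance):
        self.balance = balance      # local state saved in the object

    def deposit(self, amount):      # behavior operating on that state
        self.balance += amount

class SavingsAccount(Account):      # classes related via inheritance
    def add_interest(self, rate):
        self.deposit(self.balance * rate)

acct = SavingsAccount(100)          # an object: instance of a class
acct.add_interest(0.05)
print(acct.balance)                 # 105.0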

2.      What are objects? Explain characteristics of objects.
Ans.-   The term object was first formally utilized in the Simula language. The term object means a combination of data and logic that represents some real world entity.
When developing an object-oriented application, two basic questions always arise:
What objects does the application need?
What functionality should those objects have?

Programming in an object-oriented system consists of adding new kinds of objects to the system and defining how they behave.
The different characteristics of the objects are:
i) Objects are grouped in classes:- A class is a set of objects that share a common structure and a common behavior, a single object is simply an instance of a class. A class is a specification of structure (instance variables), behavior (methods), and inheritance for objects.

Anbu, Bala, Chandru, Deva, and Elango are instances or objects of the class Employee

Attributes: Object state and properties
Properties represent the state of an object. For example, in a car object, the manufacturer could be denoted by a name, a reference to a manufacturer object, or a corporate tax identification number. In general, an object's abstract state can be independent of its physical representation.

The attributes of a car object

ii) Objects have attributes and methods:- A method is a function or procedure that is defined for a class and typically can access the internal state of an object of that class to perform some operation. Behavior denotes the collection of methods that abstractly describes what an object is capable of doing. Each procedure defines and describes a particular behavior of the object. The object, called the receiver, is that on which the method operates. Methods encapsulate the behavior of the object. They provide interfaces to the object, and hide any of the internal structures and states maintained by the object.
iii) Objects respond to messages:- Objects perform operations in response to messages. The message is the instruction and the method is the implementation. An object, or an instance of a class, understands messages. A message has a name, just like a method, such as cost, set cost, cooking time. An object understands a message when it can match the message to a method that has the same name as the message. To match up the message, an object first searches the methods defined by its class. If the method is found, it is called. If not, the object searches the superclass of its class; if the method is found in a superclass, that method is called. Otherwise, it continues the search upward. An error occurs only if none of the superclasses contains the method.
Different objects can respond to the same message in different ways. In this way a message is different from a subroutine call. This is known as polymorphism, and this gives a great deal of flexibility. A message differs from a function in that a function says how to do something and a message says what to do. Example: draw is a message given to different objects.

Objects respond to messages according to methods defined in its class.
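A short Python sketch of this message/method matching, with invented classes, shows the superclass search and polymorphism in action:

class Shape:
    def draw(self):                 # found when a subclass lacks draw
        print("drawing a generic shape")

class Circle(Shape):
    def draw(self):                 # overrides the superclass method
        print("drawing a circle")

class Square(Shape):
    pass                            # no draw: the superclass's is used

for obj in (Circle(), Square()):
    obj.draw()                      # same message, different responses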

3.      What are behavioral things in UML mode? Explain two kinds of behavioral things.
Ans.-   Behavioral things are the dynamic parts of UML models. These are the verbs of a model, representing behavior over time and space. In all, there are two primary kinds of behavioral things.
1. Interaction
2. State Machine

Interaction: An interaction is a behavior that comprises a set of messages exchanged among a set of objects within a particular context to accomplish a specific purpose. The behavior of a society of objects or of an individual operation may be specified with an interaction. An interaction involves a number of other elements, including messages, action sequences (the behavior invoked by a message), and links (the connection between objects). Graphically, an interaction (message) is rendered as a directed line, almost always including the name of its operation, as in the figure below.

Interaction (message)

State Machine: A state machine is a behavior that specifies the sequences of states an object or an interaction goes through during its lifetime in response to events, together with its responses to those events. The behavior of an individual class or a collaboration of classes may be specified with a state machine. A state machine involves a number of other elements, including states, transitions (the change from one state to another), events (things that trigger a transition), and activities (the response to a transition). Graphically, a state is rendered as a rounded rectangle, usually including its name and its substates, if any, as in the figure below.

State
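A toy Python sketch of a state machine, with states, events that trigger transitions, and a printed activity per transition (the states and events are invented for the example):

TRANSITIONS = {
    ("Idle",    "insert_coin"): "Ready",
    ("Ready",   "press_start"): "Running",
    ("Running", "finish"):      "Idle",
}

class Machine:
    def __init__(self):
        self.state = "Idle"

    def handle(self, event):
        new_state = TRANSITIONS.get((self.state, event))
        if new_state:                       # activity on transition
            print(f"{self.state} --{event}--> {new_state}")
            self.state = new_state

m = Machine()
for e in ("insert_coin", "press_start", "finish"):
    m.handle(e)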

4.      Write a short note on Class-Responsibility-Collaboration (CRC) Cards.
Ans.-   A Class Responsibility Collaborator (CRC) model (Beck & Cunningham 1989; Wilkinson 1995; Ambler 1995) is a collection of standard index cards that have been divided into three sections, as depicted in Figure 1. A class represents a collection of similar objects, a responsibility is something that a class knows or does, and a collaborator is another class that a class interacts with to fulfill its responsibilities.  Figure 2 presents an example of two hand-drawn CRC cards.

Figure 1. CRC Card Layout.


Figure 2. Hand-drawn CRC Cards.

Although CRC cards were originally introduced as a technique for teaching object-oriented concepts, they have also been successfully used as a full-fledged modeling technique. My experience is that CRC models are an incredibly effective tool for conceptual modeling as well as for detailed design.  CRC cards feature prominently in eXtreme Programming (XP) (Beck 2000) as a design technique.  My focus here is on applying CRC cards for conceptual modeling with your stakeholders.
A class represents a collection of similar objects. An object is a person, place, thing, event, or concept that is relevant to the system at hand. For example, in a university system, classes would represent students, tenured professors, and seminars. The name of the class appears across the top of a CRC card and is typically a singular noun or singular noun phrase, such as Student, Professor, and Seminar. You use singular names because each class represents a generalized version of a singular object. Although there may be the student John O’Brien, you would model the class Student. The information about a student describes a single person, not a group of people. Therefore, it makes sense to use the name Student and not Students. Class names should also be simple. For example, which name is better: Student or Person who takes seminars?
A responsibility is anything that a class knows or does. For example, students have names, addresses, and phone numbers. These are the things a student knows. Students also enroll in seminars, drop seminars, and request transcripts. These are the things a student does. The things a class knows and does constitute its responsibilities. Important: A class is able to change the values of the things it knows, but it is unable to change the values of what other classes know.
Sometimes a class has a responsibility to fulfill but does not have enough information to do it. For example, as you see in Figure 3, students enroll in seminars. To do this, a student needs to know if a spot is available in the seminar and, if so, he then needs to be added to the seminar. However, students only have information about themselves (their names and so forth), and not about seminars. What the student needs to do is collaborate/interact with the card labeled Seminar to sign up for a seminar. Therefore, Seminar is included in the list of collaborators of Student.

Figure 3. Student CRC card.

Collaboration takes one of two forms: a request for information or a request to do something. For example, the card Student requests an indication from the card Seminar whether a space is available (a request for information). Student then requests to be added to the Seminar (a request to do something). Another way to perform this logic, however, would have been to have Student simply request that Seminar enroll him into itself. Seminar would then do the work of determining if a seat is available and, if so, enroll the student; if not, it would inform the student that he was not enrolled.
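As a hypothetical Python rendering of the Student/Seminar collaboration just described (the class names follow the CRC cards; everything else is invented):

class Seminar:
    def __init__(self, capacity):
        self.capacity = capacity
        self.students = []

    def enroll(self, student):          # a request to do something
        if len(self.students) < self.capacity:
            self.students.append(student)
            return True
        return False                    # no space available

class Student:
    def __init__(self, name):
        self.name = name                # things a student knows

    def enroll_in(self, seminar):       # collaboration with Seminar
        return seminar.enroll(self)

s = Seminar(capacity=1)
print(Student("John O'Brien").enroll_in(s))   # True: seat available
print(Student("Jane Doe").enroll_in(s))       # False: seminar full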

5.      Explain Modern Hierarchical Teams. Also draw its structure.
Ans.-   As just mentioned, the problem with traditional programmer teams is that it is all but impossible to find one individual who is both a highly skilled programmer and a successful manager. The solution is to use a matrix organizational structure and to replace the chief programmer with two individuals: a team leader, who is in charge of the technical aspects of the team's activities, and a team manager, who is responsible for all non-technical managerial decisions. The structure of the resulting team is shown in the figure below.


Figure:-The Structure of a Modern Hierarchical Programming Team

It is important to realize that this organizational structure does not violate the fundamental managerial principle that no employee should report to more than one manager. The areas of responsibility are clearly delineated. The team leader is responsible only for technical management. Thus, budgetary and legal issues are not handled by the team leader, nor are performance appraisals. On the other hand, the team leader has sole responsibility for technical issues. The team manager, therefore, has no right to promise, say, that the information system will be delivered within four weeks; promises of that sort have to be made by the team leader.
Before implementation begins, it is important to demarcate clearly those areas that appear to be the responsibility of both the team manager and the team leader. For example, consider the issue of annual leave. The situation can arise that the team manager approves a leave application because leave is a non-technical issue, only to find the application vetoed by the team leader because a deadline is approaching. The solution to this and related issues is for higher management to draw up a policy regarding those areas that both the team manager and the team leader consider to be their responsibility.

6.      Explain in brief the five levels of CMM.
Ans.-   A maturity level is a well-defined evolutionary plateau toward achieving a mature software process. Each maturity level provides a layer in the foundation for continuous process improvement.
In CMMI models with a staged representation, there are five maturity levels, designated by the numbers 1 through 5:
1.      Initial
2.      Managed
3.      Defined
4.      Quantitatively Managed
5.      Optimizing

CMMI Staged Representation - Maturity Levels

Maturity Level 1 - Initial
At maturity level 1, processes are usually ad hoc and chaotic. The organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes.
Maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects.
Maturity level 1 organizations are characterized by a tendency to overcommit, to abandon processes in a time of crisis, and to be unable to repeat their past successes.
Maturity Level 2 - Managed
At maturity level 2, an organization has achieved all the specific and generic goals of the maturity level 2 process areas. In other words, the projects of the organization have ensured that requirements are managed and that processes are planned, performed, measured, and controlled.
The process discipline reflected by maturity level 2 helps to ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.
At maturity level 2, requirements, processes, work products, and services are managed. The status of the work products and the delivery of services are visible to management at defined points.
Maturity Level 3 - Defined
At maturity level 3, an organization has achieved all the specific and generic goals of the process areas assigned to maturity levels 2 and 3.
At maturity level 3, processes are well characterized and understood, and are described in standards, procedures, tools, and methods.
A critical distinction between maturity level 2 and maturity level 3 is the scope of standards, process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At maturity level 3, the standards, process descriptions, and procedures for a project are tailored from the organization's set of standard processes to suit a particular project or organizational unit. The organization's set of standard processes includes the processes addressed at maturity level 2 and maturity level 3. As a result, the processes that are performed across the organization are consistent except for the differences allowed by the tailoring guidelines.
Maturity Level 4 - Quantitatively Managed
At maturity level 4, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.
At maturity level 4, subprocesses are selected that contribute significantly to overall process performance. These selected subprocesses are controlled using statistical and other quantitative techniques.
Quantitative objectives for quality and process performance are established and used as criteria in managing processes. Quantitative objectives are based on the needs of the customer, end users, organization, and process implementers. Quality and process performance are understood in statistical terms and are managed throughout the life of the processes.
For these processes, detailed measures of process performance are collected and statistically analyzed. Special causes of process variation are identified and, where appropriate, the sources of special causes are corrected to prevent future occurrences.
Maturity Level 5 - Optimizing
At maturity level 5, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and 3.
Processes are continually improved based on a quantitative understanding of the common causes of variation inherent in processes.
Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements.

Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement.

For More Assignments Click Here