Sunday, July 29, 2012

History of Personal Computers


A history of personal computers

A personal computer (PC) is usually a microcomputer whose price, size, and capabilities
make it suitable for personal usage. The term was popularized by IBM marketing.
Time-share "terminals" connected to central computers were sometimes used before the advent of the PC.
(Figure: a smart terminal, a TeleVideo ASCII character-mode terminal made around 1982.)
Before the PC's advent in the late 1970s to early 1980s, the only computers one might have
used, if one were privileged, were "computer-terminal based" architectures owned by large
institutions. The technology was called "computer time-share systems", and it used
minicomputers and mainframe computers. These central computer systems frequently required
large rooms: roughly, a handball-court-sized room could hold two to three small minicomputers
and their associated peripherals, each housed in cabinets about the size of three refrigerators side by
side (with blinking lights and tape drives). In that era, mainframe computers occupied whole floors;
a big hard disk was a mere 10–20 megabytes, mounted in a cabinet the size of a small chest-type
freezer. Early PCs were generally called desktop computers, and even the slower Pentium-based
personal computers of the late 1990s could easily outperform the advanced minicomputers of the
earlier era.
Since the terms "personal computer" and "PC" were introduced into everyday language,
their meanings and scope have changed somewhat. The first generations of personal
microcomputers were usually sold as kits, or merely as plans, and required a somewhat skilled
person to assemble and operate them. These were usually called microcomputers, but personal
computer was also used. Later generations were sometimes interchangeably called by the names
"home computer" and "personal computer." By the mid-1980s, "home computer" was becoming a
less common label in favor of "personal computer." These computers were pre-assembled and
required little to no technical knowledge to operate. In today's common usage, personal computer
and PC usually indicate an IBM PC compatible. Because of this association, some manufacturers of
personal computers that are not IBM PCs avoid explicitly using the terms to describe their products.
Mostly, the term PC is used to describe personal computers that use Microsoft Windows operating
systems.
(Figure: a four-megabyte RAM card measuring about 22 by 15 inches, made for the VAX 8600
minicomputer, circa 1986. Dual in-line package (DIP) integrated circuits populate nearly the
whole board; the RAM chips are mostly located in the rectangular areas to the left and right.)
One early use of "personal computer" appeared in a November 3, 1962, New York Times article
reporting John W. Mauchly's vision of future computing as detailed at a recent meeting of the
American Institute of Industrial Engineers. Mauchly stated, "There is no reason to suppose the
average boy or girl cannot be master of a personal computer." [1] Some of the first computers that
might be called "personal" were early minicomputers such as the LINC and PDP-8. By today's
standards they were very large (about the size of a refrigerator) and prohibitively costly (typically tens
of thousands of US dollars), and thus were rarely purchased by an individual. However, they were
much smaller, less expensive, and generally simpler to operate than many of the mainframe
computers of the time. Therefore, they were accessible for individual laboratories and research
projects. Minicomputers largely freed these organizations from the batch processing and
bureaucracy of a commercial or university computing center.
In addition, minicomputers were relatively interactive and soon had their own operating
systems. The minicomputer class eventually grew to include the VAX and larger machines from Data
General, Prime, and others. The minicomputer era was largely a precursor to personal computer
usage and an intermediate step between mainframes and personal computers.
Development of the single-chip microprocessor was an enormous catalyst for the
popularization of cheap, easy-to-use, and truly personal computers. Arguably the first true "personal
computer" was the Altair 8800, which brought affordable computing to an admittedly select market
in the 1970s. It was arguably this computer that spawned the development of both Apple
Computer and Microsoft, whose first product was the Altair BASIC programming language
interpreter. The first generation of microcomputers (computers based on a
microprocessor), which appeared in the mid-1970s, were usually known as home computers, thanks
largely to the success of the Steve Wozniak-designed Apple II from Apple Computer. These were less
capable and in some ways less versatile than large business computers of the day. They were
generally used by computer enthusiasts for learning to program, running simple office/productivity
applications, electronics interfacing, and general hobbyist pursuits.
It was the launch of the VisiCalc spreadsheet, initially for the Apple II (and later for the
Atari 8-bit family, Commodore PET, and IBM PC), that provided the "killer app" which turned the
microcomputer into a business tool. This was followed by the August 1981 release of the IBM PC,
which would revolutionize the computer market. Lotus 1-2-3, a combined spreadsheet (partly
based on VisiCalc), presentation-graphics, and simple database application, would become the PC's
own killer app. Good word-processing programs also appeared for many home computers, in
particular Microsoft Word, introduced for the Apple Macintosh in 1985 (while earlier
versions of Word had been created for the PC, it initially became popular through the Macintosh).

  In the January 3, 1983 issue of Time magazine, the personal computer was named "Machine of
the Year", taking the place of the magazine's usual Person of the Year for 1982. During the 1990s, the power of personal computers
increased radically, blurring the formerly sharp distinction between personal computers and multiuser
computers such as mainframes. Today, higher-end computers often distinguish themselves from
personal computers by greater reliability or greater ability to multitask, rather than by raw CPU power.

Saturday, July 28, 2012

Introduction to object-oriented programming using C++


1 Introduction

  This tutorial is a collection of lectures to be held in the on-line course Introduction to object-oriented programming using C++. In this course, object-orientation is introduced as a new programming concept which should help you develop high-quality software. Object-orientation is also introduced as a concept which makes developing projects easier. However, this is not a course for learning the C++ programming language: only those language concepts that are needed to present coding examples are introduced. And what makes object-orientation such a hot topic? To be honest, not everything that is sold under the term object-orientation is really new. For example, there are programs written in procedural languages like Pascal or C which use object-oriented concepts. But there exist a few important features which these languages do not handle, or do not handle very well.
Some people will say that object-orientation is ``modern''. When reading announcements of new products everything seems to be ``object-oriented''. ``Objects'' are everywhere. In this tutorial we will try to outline characteristics of object-orientation to allow you to judge those object-oriented products.
The tutorial is organized as follows. Chapter 2 presents a brief overview of procedural programming to refresh your knowledge in that area. 

2 A Survey of Programming Techniques


This chapter is a short survey of programming techniques. We use a simple example to illustrate the particular properties and to point out their main ideas and problems.
Roughly speaking, we can distinguish the following stages in the learning curve of someone who learns to program:
  • Unstructured programming,
  • procedural programming,
  • modular programming and
  • object-oriented programming.
This chapter is organized as follows. Sections 2.1 to 2.3 briefly describe the first three programming techniques. Subsequently, we present a simple example of how modular programming can be used to implement a singly linked list module (section 2.4). Using this we state a few problems with this kind of technique in section 2.5. Finally, section 2.6 describes the fourth programming technique.

2.1 Unstructured Programming

  Usually, people start learning programming by writing small and simple programs consisting only of one main program. Here ``main program'' stands for a sequence of commands or statements which modify data that is global throughout the whole program. We can illustrate this as shown in Fig. 2.1.


Figure 2.1: Unstructured programming. The main program directly operates on global data.

As you should all know, this programming technique has tremendous disadvantages once the program gets sufficiently large. For example, if the same statement sequence is needed at different locations within the program, the sequence must be copied. This has led to the idea of extracting these sequences, naming them, and offering a technique to call these procedures and return from them.

2.2 Procedural Programming

  With procedural programming you are able to collect recurring sequences of statements in one single place. A procedure call is used to invoke the procedure. After the sequence is processed, flow of control proceeds right after the position where the call was made (Fig. 2.2).


Figure 2.2:  Execution of procedures. After processing, flow of control returns to just after where the call was made.

By introducing parameters as well as procedures of procedures (subprocedures), programs can now be written in a more structured and error-free manner. For example, if a procedure is correct, it produces correct results every time it is used. Consequently, in case of errors you can narrow your search to those places which are not proven to be correct.
Now a program can be viewed as a sequence of procedure calls. The main program is responsible for passing data to the individual calls; the data is processed by the procedures and, once the program has finished, the resulting data is presented. Thus, the flow of data can be illustrated as a hierarchical graph, a tree, as shown in Fig. 2.3 for a program with no subprocedures.


Figure 2.3:  Procedural programming. The main program coordinates calls to procedures and hands over appropriate data as parameters.

To sum up: we now have a single program which is divided into small pieces called procedures. To enable the use of general procedures or groups of procedures in other programs as well, they must be made separately available. For that reason, modular programming allows procedures to be grouped into modules.

2.3 Modular Programming

  With modular programming, procedures with common functionality are grouped together into separate modules. A program therefore no longer consists of only one single part. It is now divided into several smaller parts which interact through procedure calls and which form the whole program (Fig. 2.4).


Figure 2.4:  Modular programming. The main program coordinates calls to procedures in separate modules and hands over appropriate data as parameters.

Each module can have its own data. This allows each module to manage an internal state which is modified by calls to procedures of this module. However, there is only one state per module and each module exists at most once in the whole program.

2.4 An Example with Data Structures

  Programs use data structures to store data. Several data structures exist, for example lists, trees, arrays, sets, bags or queues to name a few. Each of these data structures can be characterized by their structure and their access methods.

2.4.1 Handling Single Lists

  You all know singly linked lists, which use a very simple structure consisting of elements which are strung together, as shown in Fig. 2.5.


Figure 2.5:  Structure of a singly linked list.

A singly linked list just provides access methods to append a new element to its end and to delete the element at the front. Complex data structures might use already existing ones. For example, a queue can be structured like a singly linked list. However, queues provide access methods to put a data element at the end and to get the first data element (first-in first-out (FIFO) behaviour).
We will now present an example which we use to present some design concepts. Since this example is just used to illustrate these concepts and problems it is neither complete nor optimal. Refer to chapter 10 for a complete object-oriented discussion about the design of data structures.
Suppose you want to program a list in a modular programming language such as C or Modula-2. As you believe that lists are a common data structure, you decide to implement them in a separate module. Typically, this requires you to write two files: the interface definition and the implementation file. Within this chapter we will use a very simple pseudo code which you should understand immediately. Let's assume that comments are enclosed in ``/* ... */''. Our interface definition might then look similar to that below:
    /* 
     * Interface definition for a module which implements
     * a singly linked list for storing data of any type.
     */
       
    MODULE Singly-Linked-List-1

    BOOL list_initialize();
    BOOL list_append(ANY data);
    BOOL list_delete();
         list_end();

    ANY list_getFirst();
    ANY list_getNext();
    BOOL list_isEmpty();

    END Singly-Linked-List-1
Interface definitions just describe what is available, not how it is made available. You hide the details of the implementation in the implementation file. This is a fundamental principle in software engineering, so let's repeat it: you hide information about the actual implementation (information hiding). This enables you to change the implementation, for example to use a faster but more memory-consuming algorithm for storing elements, without needing to change other modules of your program: the calls to the provided procedures remain the same.
The idea of this interface is as follows: before using the list one has to call list_initialize() to initialize variables local to the module. The following two procedures implement the mentioned access methods append and delete. The append procedure needs a more detailed discussion. Function list_append() takes one argument data of arbitrary type. This is necessary since you wish to use your list in several different environments; hence, the type of the data elements to be stored in the list is not known beforehand. Consequently, you have to use a special type ANY which allows data of any type to be assigned to it. The third procedure list_end() needs to be called when the program terminates, to enable the module to clean up its internally used variables. For example, you might want to release allocated memory.
With the next two procedures list_getFirst() and list_getNext() a simple mechanism to traverse through the list is offered. Traversing can be done using the following loop:
    ANY data;

    data <- list_getFirst();
    WHILE data IS VALID DO
        doSomething(data);
        data <- list_getNext();
    END
Now you have a list module which allows you to use a list with any type of data elements. But what if you need more than one list in one of your programs?

2.4.2 Handling Multiple Lists

  You decide to redesign your list module to be able to manage more than one list. You therefore create a new interface description which now includes a definition for a list handle. This handle is used in every provided procedure to uniquely identify the list in question. Your interface definition file of your new list module looks like this:
    /* 
     * A list module for more than one list.
     */

    MODULE Singly-Linked-List-2

    DECLARE TYPE list_handle_t;

    list_handle_t list_create();
                  list_destroy(list_handle_t this);
    BOOL          list_append(list_handle_t this, ANY data);
    ANY           list_getFirst(list_handle_t this);
    ANY           list_getNext(list_handle_t this);
    BOOL          list_isEmpty(list_handle_t this);
    
    END Singly-Linked-List-2;
You use DECLARE TYPE to introduce a new type list_handle_t which represents your list handle. We do not specify how this handle is actually represented or implemented. You also hide the implementation details of this type in your implementation file. Note the difference from the previous version, where you just hid functions and procedures. Now you also hide the information for a user-defined data type called list_handle_t.
You use list_create() to obtain a handle to a new, and thus empty, list. Every other procedure now takes the special parameter this, which just identifies the list in question. All procedures now operate on this handle rather than on a module-global list.
Now you might say that you can create list objects. Each such object can be uniquely identified by its handle, and only those methods are applicable which are defined to operate on this handle.

2.5 Modular Programming Problems

  The previous section shows that you already program with some object-oriented concepts in mind. However, the example implies some problems, which we will now outline.

2.5.1 Explicit Creation and Destruction

In the example, every time you want to use a list you explicitly have to declare a handle and call list_create() to obtain a valid one. After using the list you must explicitly call list_destroy() with the handle of the list you want destroyed. If you want to use a list within a procedure, say foo(), you use the following code frame:
    PROCEDURE foo() BEGIN
        list_handle_t myList;
        myList <- list_create();

        /* Do something with myList */
        ...

        list_destroy(myList);
    END
Let's compare the list with other data types, for example an integer. Integers are declared within a particular scope (for example within a procedure). Once you've defined them, you can use them. Once you leave the scope (for example the procedure where the integer was defined) the integer is lost. It is automatically created and destroyed. Some compilers even initialize newly created integers to a specific value, typically 0 (zero).
How do list ``objects'' differ? The lifetime of a list is also defined by its scope; hence, it must be created when the scope is entered and destroyed when it is left. At creation time a list should be initialized to be empty. Therefore we would like to be able to define a list similarly to the way an integer is defined. A code frame for this would look like this:
    PROCEDURE foo() BEGIN
        list_handle_t myList; /* List is created and initialized */

        /* Do something with the myList */
        ...
     END /* myList is destroyed */
The advantage is that the compiler now takes care of calling the initialization and termination procedures as appropriate. For example, this ensures that the list is correctly deleted, returning its resources to the program.

2.5.2 Decoupled Data and Operations

Decoupling data and operations usually leads to a structure based on the operations rather than the data: modules group common operations (such as those list_...() operations) together. You then use these operations by explicitly providing them with the data on which they should operate. The resulting module structure is therefore oriented on the operations rather than on the actual data. One could say that the defined operations specify the data to be used.
In object-orientation, structure is organized by the data. You choose the data representations which best fit your requirements. Consequently, your programs get structured by the data rather than by the operations. Thus, it is exactly the other way around: the data specifies the valid operations. Now modules group data representations together.

2.5.3 Missing Type Safety

  In our list example we have to use the special type ANY to allow the list to carry any data we like. This implies that the compiler cannot guarantee type safety. Consider the following example, which the compiler cannot check for correctness:
    PROCEDURE foo() BEGIN
        SomeDataType data1;
        SomeOtherType data2;
        list_handle_t myList;

        myList <- list_create();
        list_append(myList, data1);
        list_append(myList, data2); /* Oops */

        ...

        list_destroy(myList);
    END
It is your responsibility to ensure that your list is used consistently. A possible solution is to additionally store information about the type with each list element. However, this implies more overhead and still does not relieve you from having to know what you are doing.
What we would like to have is a mechanism which allows us to specify for which data type the list should be defined. The overall function of a list is always the same, whether we store apples, numbers, cars or even lists. Therefore it would be nice to be able to declare a new list with something like:
    list_handle_t<Apple> list1; /* a list of apples */
    list_handle_t<Car> list2; /* a list of cars */
The corresponding list routines should then automatically return the correct data types. The compiler should be able to check for type consistency.

2.5.4 Strategies and Representation

The list example implies operations to traverse the list. Typically, a cursor which points to the current element is used for that purpose. This implies a traversing strategy, which defines the order in which the elements of the data structure are to be visited.
For a simple data structure like the singly linked list one can think of only one traversing strategy. Starting with the leftmost element one successively visits the right neighbours until one reaches the last element. However, more complex data structures such as trees can be traversed using different strategies. Even worse, sometimes traversing strategies depend on the particular context in which a data structure is used. Consequently, it makes sense to separate the actual representation or shape of the data structure from its traversing strategy. We will investigate this in more detail in chapter 10.
What we have shown with the traversing strategy applies to other strategies as well. For example, insertion might or might not be done in a way that maintains an ordering of the elements.

2.6 Object-Oriented Programming

  Object-oriented programming solves some of the problems just mentioned. In contrast to the other techniques, we now have a web of interacting objects, each keeping its own state (Fig. 2.6).


Figure 2.6:  Object-oriented programming. Objects of the program interact by sending messages to each other.

Consider the multiple-lists example again. The problem with modular programming here is that you must explicitly create and destroy your list handles, and then use the procedures of the module to modify each of your handles.
In contrast to that, in object-oriented programming we would have as many list objects as needed. Instead of calling a procedure which we must provide with the correct list handle, we would directly send a message to the list object in question. Roughly speaking, each object implements its own module allowing for example many lists to coexist.
Each object is responsible for initializing and destroying itself correctly. Consequently, there is no longer any need to explicitly call a creation or termination procedure.
You might ask: so what? Isn't this just a fancier modular programming technique? You would be right, if this were all there is to object-orientation. Fortunately, it is not. Beginning with the next chapters, additional features of object-orientation are introduced which make object-oriented programming a new programming technique.

2.7 Exercises

 
1.
The list examples include the special type ANY to allow a list to carry data of any type. Suppose you want to write a module for a specialized list of integers which provides type checking. All you have is the interface definition of module Singly-Linked-List-2.
(a)
What does the interface definition for a module Integer-List look like?
(b)
Discuss the problems introduced by using type ANY for list elements in module Singly-Linked-List-2.
(c)
What are possible solutions to these problems?
2.
What are the main conceptual differences between object-oriented programming and the other programming techniques?
3.
If you are familiar with a modular programming language, try to implement module Singly-Linked-List-2. Subsequently, implement a list of integers and a list of integer lists with the help of this module.

Eligibility Criteria for IT

Eligibility Criteria for IT:
1. Open only to students with the following degrees
- Category 1: BE / B Tech / ME / M Tech / MCA / M Sc (Computer Science / IT / Software Engg)
- Category 2: B Sc / BCA / M Sc (except Computer Science / IT / Software Engg)
2. Year of graduation: 2011 batch only
3. Consistent First Class (over 60%) in X, XII, UG and PG (if applicable)
4. No outstanding arrears
5. Candidates with degrees through correspondence/ part-time courses are not eligible to apply
6. Good interpersonal, analytical and communication skills
Eligibility Criteria for IT IS:
1. Open only to students with the following degrees
- BSC – Computer Science/Computer Technology/ IT /Maths/Statistics/Electronics and BCA
- MSC – Maths/Statistics/Electronics
2. Year of graduation: 2010 or 2011 batch only
3. Consistent First Class (over 60%) in X, XII, and UG
4. Candidates holding correspondence or part time degrees are not eligible to apply
5. Good interpersonal and excellent communication skills
6. Willingness to work in shifts (including night shifts)
7. Willing to work at any Cognizant location across India
Eligibility Criteria for BPO:
1. Any Arts & Science graduate except BSC – IT/CS, Electronics, Maths & Statistics
2. Hotel Management & MBA graduates are also eligible
3. Year of Graduation: 2010 or 2011 batch only
4. Consistency of 50% in X, XII, and UG
5. Good verbal and excellent communication skills
6. Willingness to work in shifts (including night shifts)

Companies after MCA


·  WIPRO

·  HP
·  ASTRA MICROWAVE PRODUCTS LTD
·  KOGEN-X PVT LTD
·  TECH MAHINDRA
·  21 CENTURY WEB
·  INFOTECH SOLUTIONS
·  ACCENTURE
·  INFOSYS
·  CTS
·  SUN MICRO SYSTEMS
·  SYNGENTA INDIA LTD
·  KREETI TECH
·  SYNTEL
·  Gameloft
·  Satyam Computers
·  SilverTouch Technologies Limited

Friday, July 27, 2012

Operating System


Today's Operating System 

Command-line interface (CLI) operating systems can operate using only the keyboard for
input. Modern OSs use a mouse for input with a graphical user interface (GUI), sometimes
implemented as a shell. The appropriate OS may depend on the hardware architecture, specifically
the CPU, with only Linux and BSD running on almost any CPU. Windows NT has been ported to
other CPUs, most notably the Alpha, but few others. Since the early 1990s the choice for personal
computers has been largely limited to the Microsoft Windows family and the Unix-like family, of
which Linux and Mac OS X are becoming the major choices. Mainframe computers and embedded
systems use a variety of different operating systems, many with no direct connection to Windows or
Unix, but typically more similar to Unix than Windows.
• Personal computers
o IBM PC compatible - Microsoft Windows and smaller Unix-variants (like Linux and
BSD)
o Apple Macintosh - Mac OS X, Windows, Linux and BSD
• Mainframes - A number of unique OS's, sometimes Linux and other Unix variants.
• Embedded systems - a variety of dedicated OS's, and limited versions of Linux or other OS's
Unix-like
The Unix-like family is a diverse group of operating systems, with several major subcategories
including System V, BSD, and Linux. The name "Unix" is a trademark of The Open
Group which licenses it for use to any operating system that has been shown to conform to the
definitions that they have cooperatively developed. The name is commonly used to refer to the large
set of operating systems which resemble the original Unix.
Unix systems run on a wide variety of machine architectures. They are used heavily as
server systems in business, as well as workstations in academic and engineering environments. Free
software Unix variants, such as Linux and BSD, are increasingly popular. They are used in the
desktop market as well, for example Ubuntu, but mostly by hobbyists.
Some Unix variants like HP's HP-UX and IBM's AIX are designed to run only on that
vendor's proprietary hardware. Others, such as Solaris, can run on both proprietary hardware and on
commodity x86 PCs. Apple's Mac OS X, a microkernel BSD variant derived from NeXTSTEP,
Mach, and FreeBSD, has replaced Apple's earlier (non-Unix) Mac OS. Over the past several years,
free Unix systems have supplanted proprietary ones in most instances. For instance, scientific
modeling and computer animation were once the province of SGI's IRIX. Today, they are
dominated by Linux-based or Plan 9 clusters.
The team at Bell Labs who designed and developed Unix went on to develop Plan 9 and
Inferno, which were designed for modern distributed environments. They had graphics built-in,
unlike Unix counterparts that added it to the design later. Plan 9 did not become popular because,
unlike many Unix distributions, it was not originally free. It has since been released under the
free and open-source Lucent Public License, and has an expanding community of developers.
Inferno was sold to Vita Nuova and has been released under a GPL/MIT license.
Microsoft Windows
The Microsoft Windows family of operating systems originated as a graphical layer on top of
the older MS-DOS environment for the IBM PC. Modern versions are based on the newer Windows
NT core that first took shape in OS/2 and borrowed from OpenVMS. Windows runs on 32-bit and
64-bit Intel and AMD computers, although earlier versions also ran on the DEC Alpha, MIPS, and
PowerPC architectures (some work was done to port it to the SPARC architecture).
As of 2004, Windows held a near-monopoly of around 90% of the worldwide desktop market share,
although this is thought to be dwindling due to the increase of interest focused on open source
operating systems. [1] It is also used on low-end and mid-range servers, supporting applications
such as web servers and database servers. In recent years, Microsoft has spent significant marketing

Delhi University (DU) MCA Syllabus 2012


DU MCA Syllabus 2012


Delhi University Entrance Test shall have the following components: Mathematical Ability, Computer Science, Logical Reasoning, and English Comprehension.
Delhi University MCA Syllabus 2012:
1. Mathematics: Mathematics at the level of the B.Sc. program of the University of Delhi.
2. Computer Science: Introduction to computer organization including data representation, Boolean circuits and their simplification, basics of combinational circuits; C programming: data types including user-defined data types, constants and variables, operators and expressions, control structures, modularity: use of functions, scope, arrays.
3. Logical Ability & English Comprehension: Problem-solving using basic concepts of arithmetic, algebra, geometry and data analysis. Correct usage of the English language and reading comprehension.
Delhi University M.Sc. Syllabus 2012:
I. Computer Science
1.     Discrete Structures: Sets, functions, relations, counting; generating functions, recurrence relations and their solutions; algorithmic complexity, growth of functions and asymptotic notations.
2.     Programming, Data Structures and Algorithms: Data types, control structures, functions/modules, object-oriented programming concepts: sub-typing, inheritance, classes and subclasses, etc. Basic data structures like stacks, linked lists, queues, trees, binary search trees, AVL and B+ trees; sorting, searching, order statistics, graph algorithms, greedy algorithms and dynamic programming.
3.     Computer System Architecture: Boolean algebra and computer arithmetic, flip-flops, design of combinational and sequential circuits, instruction formats, addressing modes, interfacing peripheral devices, types of memory and their organization, interrupts and exceptions.
4.     Operating Systems: Basic functionalities, multiprogramming, multiprocessing, multithreading, timesharing, real-time operating systems; processor management, process synchronization, memory management, device management, file management, security and protection; case study: Linux.
5.     Software Engineering: Software process models, requirement analysis, software specification, software testing, software project management techniques, quality assurance.
6.     DBMS and File Structures: File organization techniques, database approach, data models, DBMS architecture; data independence, E-R model, relational data models, SQL, normalization and functional dependencies.
7.     Computer Networks: ISO-OSI and TCP/IP models, basic concepts like transmission media, signal encoding, modulation techniques, multiplexing, error detection and correction; overview of LAN/MAN/WAN; data link, MAC, network, transport and application layer protocol features; network security.
II. Mathematics
1.     Algebra: Groups, subgroups, normal subgroups, cosets, Lagrange's theorem, rings and their properties, commutative rings, integral domains and fields, subrings, ideals and their elementary properties. Vector spaces, subspaces and their properties, linear independence and dependence of vectors, matrices, rank of a matrix, reduction to normal forms, linear homogeneous and non-homogeneous equations, Cayley-Hamilton theorem, characteristic roots and vectors. De Moivre's theorem, relation between roots and coefficients of an nth-degree equation, solution of cubic and biquadratic equations, transformation of equations.
2.     Calculus: Limits and continuity, differentiability of functions, successive differentiation, Leibnitz's theorem, partial differentiation, Euler's theorem on homogeneous functions, tangents and normals, asymptotes, singular points, curve tracing, reduction formulae, integration and properties of definite integrals, quadrature, rectification of curves, volumes and surfaces of solids of revolution.
3.     Geometry: Systems of circles, parabola, ellipse and hyperbola, classification and tracing of curves of second degree, spheres, cones, cylinders and their properties.
4.     Vector Calculus: Differentiation and partial differentiation of a vector function, derivative of sum, dot product and cross product, gradient, divergence and curl.
5.     Differential Equations: Linear, homogeneous and bi-homogeneous equations, separable equations, first-order higher-degree equations, algebraic properties of solutions, the Wronskian, its properties and applications, linear homogeneous equations with constant coefficients, solution of second-order differential equations. Linear non-homogeneous differential equations, the method of undetermined coefficients, Euler's equations, simultaneous differential equations and total differential equations.
6.     Real Analysis: Neighborhoods, open and closed sets, limit points and the Bolzano-Weierstrass theorem, continuous functions, sequences and their properties, limit superior and limit inferior of a sequence, infinite series and their convergence. Rolle's theorem, mean value theorem, Taylor's theorem, Taylor series, Maclaurin series, maxima and minima, indeterminate forms.
7.     Probability and Statistics: Measures of dispersion and their properties, skewness and kurtosis, introduction to probability, theorems of total and compound probability, Bayes' theorem, random variables, probability distributions and density functions, mathematical expectation, moment generating functions, cumulants and their relation with moments, binomial, Poisson and normal distributions and their properties, correlation and regression, method of least squares, introduction to sampling and sampling distributions like the Chi-square, t and F distributions, tests of significance based on the t, Chi-square and F distributions.
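To illustrate the kind of problem item 5 covers (linear homogeneous equations with constant coefficients), here is a standard worked example of our own choosing:

```latex
% Solve y'' - 3y' + 2y = 0.
% The characteristic equation factors, giving two real roots.
\[
y'' - 3y' + 2y = 0
\quad\Longrightarrow\quad
r^2 - 3r + 2 = (r-1)(r-2) = 0
\quad\Longrightarrow\quad
r = 1,\ 2,
\]
\[
\text{so the general solution is}\quad
y(x) = C_1 e^{x} + C_2 e^{2x}.
\]
```

Each root $r$ of the characteristic polynomial contributes a solution $e^{rx}$, and the general solution is their linear combination.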
III. English Comprehension:
Correct usage of English language and reading comprehension.

Thursday, July 26, 2012

MCA ENTRANCE SYLLABUS


Syllabus for MCA Entrance Examination
Entrance Test shall have the following components: Mathematical Ability, Computer Science, Logical Reasoning, and English Comprehension
The detailed syllabus is identical to the Delhi University MCA Syllabus 2012 given in the post above.
*As per MCA Entrance Notification 2012
Note:
The above information has been taken from the website of the respective university/institute. Sanmacs India is in no way responsible for the authenticity of the data provided herein. For any discrepancy, the student should contact the respective university/institute.