Parallel computing has roots reaching as far back as 1842, with the design of the Analytical Engine by the English mathematician and computing pioneer Charles Babbage. Over the following century, researchers such as John Cocke, Daniel Slotnick, and Gene Amdahl, together with organizations such as IBM, Burroughs Corporation, and Honeywell, formulated the governing laws and laid the groundwork for today's parallel computing platforms. Since then, parallel computing has largely displaced sequential computing through instruction-level, thread-level, and task-level parallelism, and the evolution from SISD to MIMD programs has been remarkable. When a job is divided into several parts that can run in parallel with one another, not only does the overall execution speed increase, but computer resources such as memory and processor time are also used more effectively. On this basis, computers can be classified by the hardware level at which they support parallelism: multi-core/many-core computers, multiprocessor computers, clusters, grids, and massively parallel supercomputers. Parallelization also brings with it concerns such as concurrency control, deadlocks, and task synchronization, which have been the subject of extensive research over the years. This chapter focuses on the history, background, and types of parallel computing, along with memory architectures, message passing, concurrency control, deadlocks, and their possible solutions.
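As a minimal sketch of the idea of dividing one job into independently runnable parts (not taken from this chapter; the chunk count, worker pool size, and helper function are illustrative assumptions), the following Python example splits a large summation into chunks and evaluates them in parallel worker processes using the standard multiprocessing module.

# Minimal sketch (illustrative only): dividing one job into parts
# that run in parallel, using Python's standard multiprocessing module.
from multiprocessing import Pool

def partial_sum(bounds):
    # Sum the integers in [lo, hi) -- one independent part of the job.
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000          # total job: sum of 0 .. n-1
    parts = 4               # assumed number of parts / worker processes
    step = n // parts
    chunks = [(i * step, n if i == parts - 1 else (i + 1) * step)
              for i in range(parts)]

    with Pool(processes=parts) as pool:
        results = pool.map(partial_sum, chunks)   # parts run concurrently

    print(sum(results))     # combine the partial results; equals sum(range(n))

Because each part touches a disjoint range, the parts need no coordination while running and only their results are combined at the end; later sections of the chapter deal with the harder cases in which parallel tasks share data and must be synchronized.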