August 01, 2018

Brief about Book Library Data Warehouse System

Topic: BOOK LIBRARY
Subject: DATA WAREHOUSE
Prepared by: Sumit

Q1) Identify the business processes of interest to senior management in the industry (domain) allocated to your group.
Answer)
Major libraries have large collections and high circulation. Managing libraries electronically has resulted in the creation and management of large library databases that serve the students and teachers cooperating in this e-learning environment.

Below are some of the business processes of interest to senior management:
  • Variety of Books: The library needs to better understand what books its patrons want and are willing to pay for.
  • Fund the Books: The library needs to manage its costs and cash flow so that it can continue to operate.
  • Make the Library Reliable: The library must get patrons the books they want on time.
  • Book Borrowing
A crucial part of a library is the human intermediary, the librarian. This intermediary connects users to the information they need and can advise on using the information retrieval systems and working with information.

Q2) List some questions that would be raised by senior management for improving the business process.
Answer)
There are many questions that senior management can ask to improve the above business processes.
Some of the questions that will be asked are:
  • When was the item collected?
  • Which librarian registered it?
  • What is the item about?
  • At which branch library was the item registered?

Q3) To address the above-mentioned questions, propose a DW design (schema diagram).
Answer)
In general, a DW design follows four main steps:
Step 1: Identify the Business Process
Step 2: Declare the Grain
Step 3: Identify the Dimensions
Step 4: Identify the Facts

For our book library case, the steps are as follows:
  1. Business Process: Book borrowing is the business process.
  2. Declare the Grain: The second step is to declare the grain of the business process. In the book borrowing process, we declare a transaction issued in the library automation system as the grain, meaning one item borrowed by one patron.
  3. Identify the Dimensions: The third step is to choose the dimensions. Dimensions represent how people describe and inspect the data from the process. The following dimension tables will be used:
    • The Patron-Dimension describes the library patron’s characteristics. The attributes of Patron-Dimension include the name of the patron, gender, occupation, patron type, department, college, and so on.
    • The Item-Dimension describes every item belonging to the library; its attributes indicate what relates to the item, including call number, title, author, subject, classification, language, location, MARC, collecting source, and so on.
    • The Location-Dimension describes the branch libraries supervised by the city library; its attributes include the name of the branch library, the name of the district it is located in, and the name of the region library.
    • The Date-Dimension describes every hour of one day, and its attributes include hour, date, week, month and year. 
  4. Identify the Facts: The fourth step is to identify the facts. In the case of book borrowing, we identify the fact to measure the number of books borrowed. We declared a transaction that an item was borrowed by a patron as the grain in the prior step. Thus, the number of books borrowed here is equal to one.
  • The star schema is perhaps the simplest data warehouse schema.
  • It is called a star schema because the entity-relationship diagram of this schema resembles a star, with points radiating from a central table. 
  • The center of the star consists of a large fact table and the points of the star are the dimension tables.
Star Schema for Library Book Borrowing:
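Since the schema diagram itself is not reproduced here, the star schema can be sketched as illustrative DDL built from the four steps above. This is a minimal sketch: table and column names are assumptions derived from the dimensions described in Q3, not from any actual system.

```python
import sqlite3

# One fact table at the center, four dimension tables as the points of the star.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patron_dim   (patron_id INTEGER PRIMARY KEY, name TEXT, gender TEXT,
                           occupation TEXT, patron_type TEXT, department TEXT);
CREATE TABLE item_dim     (item_id INTEGER PRIMARY KEY, call_number TEXT, title TEXT,
                           author TEXT, subject TEXT, language TEXT);
CREATE TABLE location_dim (location_id INTEGER PRIMARY KEY, branch_name TEXT,
                           district TEXT, region TEXT);
CREATE TABLE date_dim     (date_id INTEGER PRIMARY KEY, hour INTEGER, date TEXT,
                           week INTEGER, month INTEGER, year INTEGER);

-- Fact table: one row per borrowing transaction (the declared grain),
-- so books_borrowed is always 1 at this grain.
CREATE TABLE borrow_fact (
    patron_id      INTEGER REFERENCES patron_dim(patron_id),
    item_id        INTEGER REFERENCES item_dim(item_id),
    location_id    INTEGER REFERENCES location_dim(location_id),
    date_id        INTEGER REFERENCES date_dim(date_id),
    books_borrowed INTEGER DEFAULT 1
);
""")
```

Reports then join the fact table to whichever dimensions the question needs, e.g. counting borrowings per branch per month.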


Q4) List aggregations to improve the DW performance. Justify.
Answer)
  • Aggregates provide improvements in performance because of the significantly smaller number of records.
  • Aggregates allow quick access to Book Dimension data during reporting. Similar to database indexes, they serve to improve performance.
  • Aggregates are particularly useful in the following cases:
    • Executing and navigating query data leads to delays when you run groups of queries
    • You want to speed up the execution and navigation of a specific query
    • You often use attributes in queries
    • You want to speed up reporting with specific hierarchies by adding a level of a specific hierarchy.
  • If the aggregate contains data that is to be evaluated by a query, the query data is read automatically from the aggregate.
  • Example query: Total sales for books during the first week of December 2000 for location Mumbai.
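The benefit of an aggregate can be illustrated with a small sketch: a pre-computed weekly summary answers the example query while reading far fewer rows than the base fact table. All names and numbers are illustrative (the week number standing in for "first week of December 2000" is an assumption).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE borrow_fact (branch TEXT, year INTEGER, week INTEGER, books INTEGER)")

# Illustrative detail data: 800 individual fact rows.
rows = [("Mumbai", 2000, 49, 1)] * 500 + [("Delhi", 2000, 49, 1)] * 300
conn.executemany("INSERT INTO borrow_fact VALUES (?,?,?,?)", rows)

# Pre-compute a weekly aggregate: the 800 fact rows collapse to 2 summary rows.
conn.execute("""
CREATE TABLE weekly_agg AS
SELECT branch, year, week, SUM(books) AS total
FROM borrow_fact
GROUP BY branch, year, week
""")

# The example query now reads 1 aggregate row instead of 500 fact rows.
total = conn.execute(
    "SELECT total FROM weekly_agg WHERE branch='Mumbai' AND year=2000 AND week=49"
).fetchone()[0]
print(total)  # 500
```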

Q5) List and justify any 5 metadata items that will be of interest to various stakeholders.
Answer)
  • Metadata means "data about data". 
  • Metadata is data that provides information about one or more aspects of other data; it summarizes basic information about the data, making it easier to track and work with specific data.
  • Below are metadata items of various interest to stakeholders:
    • Purpose of the book
    • Time and date of issuing the book
    • Creator or author of the book
    • Location on a computer network where the book was issued.
    • Book quantity
    • Book quality
Types of Meta Data:
  • Descriptive metadata is usually used for search and identification, such as searching and finding an object, such as title, author, topic, keyword, and publisher.
  • Administrative metadata provides information to help manage the source. Administrative metadata refers to the technical information, including file type, or when and how the file was created.
  • Structural metadata describes how components of an object are organized. An example of structural metadata will be how the pages are ordered to make chapters of a book.
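The three metadata types above can be illustrated for a single library item as a simple grouped record; the field names and values here are assumptions for illustration only.

```python
# Illustrative metadata record for one library item, grouped by
# the three metadata types described above.
book_metadata = {
    "descriptive": {       # used for search and identification
        "title": "Data Warehousing Fundamentals",
        "author": "P. Ponniah",
        "subject": "data warehouse",
        "keywords": ["DW", "OLAP"],
    },
    "administrative": {    # helps manage the resource
        "file_type": "MARC record",
        "created": "2018-08-01",
        "issued_at_branch": "Central",
    },
    "structural": {        # how components of the object are organized
        "chapters": ["Introduction", "Dimensional Modeling", "ETL"],
    },
}

print(sorted(book_metadata))  # ['administrative', 'descriptive', 'structural']
```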
Following are some key items to be included in metadata:

Definition of data warehouse − It includes the description of the structure of the data warehouse. The description is defined by schema, views, hierarchies, derived data definitions, and data mart locations and contents.

Operational metadata − It includes the currency of data and data lineage. Currency of data indicates whether the data is active, archived, or purged. Data lineage is the history of the migrated data and the transformations applied to it.

Business metadata − It holds the data ownership information, business definitions, and changing policies.

July 29, 2018

Analytics Skills - Technology DataStage-L1


Question.
Continue action if a lookup on a link fails

a)Drops the row and Job fails
b)Drops the row and will skip next lookup
c) Drops the row and will skip all further lookups
d) Drops the row and continues with the next lookup

Answer: Drops the row and continues with the next lookup

Question.
Fail action if a lookup on a link fails

a) Causes the job to reject records
b) Causes the job to issue a fatal error and continues with next lookup
c) Causes the job to issue a fatal error and stop
d) Causes the job to issue a fatal error and fails records from subsequent lookups

Answer: Causes the job to issue a fatal error and stop

Question.
Which stage is used to compute the sum of salary grouped by deptno?
a)Join
b)Aggregate
c)Merge
d) Copy

Answer: Aggregate
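What the Aggregator stage computes here can be sketched as a plain group-by-sum; the sample rows are illustrative, not from any DataStage job.

```python
from collections import defaultdict

# Sum of salary grouped by deptno, as the Aggregator stage would compute it.
rows = [
    {"deptno": 10, "salary": 3000},
    {"deptno": 20, "salary": 2500},
    {"deptno": 10, "salary": 4000},
]

totals = defaultdict(int)
for r in rows:
    totals[r["deptno"]] += r["salary"]

print(dict(totals))  # {10: 7000, 20: 2500}
```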

Question.
Which activity is used to execute shell scripts or bat files?

a) Wait for File activity
b) Execute Command activity
c) Program activity
d) Run Program activity

Answer: Execute Command activity

Question.
A Change_Code value of 3 in the Change Capture stage in DataStage represents

a) Copy
b) Delete
c) New
d) Edit

Answer: Edit

Question.
A Change_Code value of 0 in the Change Capture stage in DataStage represents

a) Edit
b) Delete
c) New
d) Copy

Answer: Copy

Question.
A Change_Code value of 1 in the Change Capture stage in DataStage represents
a) Copy
b) Delete
c) Edit
d) New

Answer: New

Question.
Which one is used to remove duplicates in data?

a) Unique property set in Join Stage
b)Unique property set in Merge Stage
c)Remove Duplicate Stage
d)Dedup Sort Stage

Answer: Remove Duplicate Stage

Question.
Which of the below stages are used to achieve Union all operation on input data sources?

a)Join
b) Funnel
c) Lookup
d) Filter

Answer: Funnel

Question.
Which of the following is not type of view in Datastage Director?

a)Job View
b)Log View
c)Status View
d) Parallel View

Answer: Parallel View

Question.
Which one of the following can be used to schedule jobs

a) DataStage Director
b)DataStage Designer
c)DataStage Administrator
d) Data Stage Exporter

Answer: DataStage Director

Question.
Which one is used to create workflows in DataStage?

a) Parallel Jobs
b) Sequence Jobs
c) Server Jobs
d) Workflow Activity

Answer: Sequence Jobs

Question.
A Change_Code value of 2 in the Change Capture stage in DataStage represents

a) Copy
b) Delete
c) New
d) Edit

Answer: Delete

Question.
Change Capture stage is

a) File Stage
b) Database Stage
c) Processing Stage
d) Miscellaneous Stage

Answer: Processing Stage

Question.
Which Stage allows you to specify several Reject links?

a) Lookup
b)Merge
c)Join
d) filter

Answer: Merge

Question.
If two rows in Change Capture have the same key columns, you can match the columns in the rows to determine whether one is a modified copy of the other.

a) Key Column
b) Value Columns
c)After Data Column
d)Before Data Column

Answer: Value Columns

Question.
Which activity will wait for a file to appear in a folder?

a)Wait for file activity
b)Job Activity
c)Execute Command Activity
d)Sequential file Stage activity

Answer: Wait for file activity

Question.
Which of the below stages is used to restrict data based on where-clause predicates?

a)Join
b)Funnel
c) Filter
d) Copy

Answer: Filter

Question.
Using the Copy stage:

a)Order of columns can be changed but data type of columns cannot be changed
b)Order of columns cannot be changed but data type of columns can be changed
c)Order of columns can be changed and data type of columns can be changed
d) both the Order of columns and data type of columns cannot be changed

Answer: Order of columns can be changed but data type of columns cannot be changed

Question.
Which option will send records with null values when a lookup failure happens?

a) Reject
b) Drop
c) Fail
d) Continue

Answer: Continue

July 25, 2018

Analytics - Hadoop L1


Question.
Which of the following are true about Hadoop?
Open Source
Distributed Processing Framework
Distributed Storage Framework
All of these

Answer: All of these

Question.
Which of the following are false about Hadoop?
Hadoop works in Master-Slave fashion
Master & Slave both are worker nodes
The user submits work to the master, which distributes it to the slaves
Slaves are the actual worker nodes

Answer: Master & Slave both are worker nodes

Question.
What is a Metadata in Hadoop?
Data stored by user
Information about the data stored in datanodes
User information
None of these

Answer: Information about the data stored in datanodes

Question.
What is a Daemon?
Process or service that runs in background
Applications submitted by user
Web application running on web server
None of these

Answer: Process or service that runs in background

Question.
All of the following accurately describe Hadoop EXCEPT?
a. Batch processing
b.Open-source
c. Distributed computing
d. Real-time

Answer: Real-time

Question.
All of the following is a core component of Hadoop EXCEPT?
a. Hive
b. HDFS
c. MapReduce
d. YARN

Answer: Hive

Question.
Hadoop is a framework that uses a variety of related tools. Common tools included in a typical implementation include:
a. MapReduce, HDFS, Spool
b. MapReduce, MySQL, Google Apps
c. Cloudera, HortonWorks, MapR
d. MapReduce, Hive, Hbase

Answer: MapReduce, Hive, Hbase

Question.
Which of the following can be used to create workflows when multiple MapReduce and Pig programs need to be executed?
a. Sqoop
b. Zookeeper
c. Oozie
d. Hbase

Answer: Oozie

Question.
Which of the following can be used to transfer bulk data between Hadoop and structured databases
a. Sqoop
b. Hive
c. Pig
d. Spark

Answer: Sqoop

Question.
How many single points of failure does a High Availability HDFS architecture have?
a. 0
b. 1
c. 2
d. 3

Answer: 0

Question.
If a file of size 300MB needs to be stored in the HDFS (block size=64MB, replication factor=2), how many blocks are created for this file in the HDFS?
a. 10
b. 11
c. 12
d. 15

Answer: 10
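The arithmetic behind this answer can be checked directly: a 300 MB file splits into ceil(300/64) = 5 blocks, and a replication factor of 2 doubles the number of stored block copies.

```python
import math

file_mb, block_mb, replication = 300, 64, 2

# 300 / 64 = 4.6875, so the file needs 5 blocks (the last one is partial).
blocks = math.ceil(file_mb / block_mb)

# Each block is replicated, so 5 * 2 = 10 block copies are stored.
total_copies = blocks * replication

print(blocks, total_copies)  # 5 10
```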

Question.
What is not a default value for a data block size in the HDFS?
a. 64MB
b. 128MB
c. 512MB
d. 256MB

Answer: 512MB

Question.
Which of the following architectures best describes the HDFS architecture?
a. High Availability
b. Master-Slave
c. Connected
d. Peer

Answer: Master-Slave

Question.
Which of the following is a master process in the HDFS architecture?
a. Datanode
b. JobTracker
c. Namenode
d. Secondary Namenode

Answer: Namenode

Question.
Which of the following is true about Hadoop?

Before storing data we need to specify the schema
We will lose data if one data node crashes
We can add any number of nodes to the cluster on the fly (n ~ 15000)
Data is first processed on the master, then on the slaves

Answer: We can add any number of nodes to the cluster on the fly (n ~ 15000)

Question.
Choose the correct statement?

Master assigns work to all the slaves
We cannot edit data once written in Hadoop
The client needs to interact with the master first, as it is the single place where all the metadata is available
All of these

Answer: All of these

Question.
Which of the following is the essential module of HDFS?
Node Manager
Resource Manager
DataNode
ALL of the above

Answer: DataNode

Question.
Which of the below is NOT a kind of metadata in NameNode?

Block locations of files
List of files
File access control information
No. of file records

Answer: No. of file records

Question.
Which statement is true about DataNode?

It is the actual worker node that saves and stores meta data.
It is the slave node that saves and stores metadata.
It is the Master node that saves and stores actual data.
It is the slave node that saves and stores actual data.


Answer: It is the slave node that saves and stores actual data.

Question.
Is the Secondary NameNode the backup node?
TRUE
FALSE

Answer: FALSE

Question.
Which of the below is a programming model designed for processing large volumes of data in parallel by dividing the work into a set of independent tasks?

MapReduce
Hive
Pig
HDFS

Answer: MapReduce

Question.
The mapper's sorted output is input to the:
Reducer
Mapper
Shuffle
All of the mentioned

Answer: Reducer


Question.
Which of the following generate intermediate key-value pair?
Reducer
Mapper
Combiner
Partitioner

Answer: Mapper
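The mapper's role of emitting intermediate key-value pairs can be sketched with the classic word-count example; the input lines and function names here are illustrative.

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # The map phase emits intermediate (key, value) pairs.
    for word in line.split():
        yield (word, 1)

def reducer(word, counts):
    # The reduce phase aggregates all values for one key.
    return (word, sum(counts))

lines = ["big data big", "data"]
intermediate = [kv for line in lines for kv in mapper(line)]

# The shuffle/sort step groups the pairs by key before they reach the reducer.
intermediate.sort(key=itemgetter(0))
result = dict(reducer(k, [v for _, v in g])
              for k, g in groupby(intermediate, key=itemgetter(0)))

print(result)  # {'big': 2, 'data': 2}
```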

Question.
What is the major advantage of storing data in a block size of 128MB?
It saves disk seek time
It saves disk processing time
It saves disk access time
It saves disk latency time

Answer: It saves disk seek time

Question.
The role of the Partitioner in a MapReduce job is:

a) To partition input data into equal parts
b) Distribute data among available reducers
c) To partition data and send to each mapper
d) Distribute data among available mappers

Answer:  Distribute data among available reducers
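The default partitioning scheme can be sketched as hashing each intermediate key modulo the number of reducers, which both spreads keys across reducers and guarantees that all values for one key reach the same reducer. This is a simplified sketch of the idea, not Hadoop's actual HashPartitioner code.

```python
def partition(key, num_reducers):
    # Route a key to one of the reducers by hashing it.
    return hash(key) % num_reducers

num_reducers = 3
keys = ["dept10", "dept20", "dept30", "dept10"]

# Every partition index is a valid reducer number.
indexes = [partition(k, num_reducers) for k in keys]
print(all(0 <= i < num_reducers for i in indexes))  # True

# The same key always goes to the same reducer.
print(partition("dept10", num_reducers) == partition("dept10", num_reducers))  # True
```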

Question.
Which of the following is Single point of Failure?
NameNode
Secondary NameNode
DataNode
None of above

Answer: NameNode

Question.
Apache Hbase is

a) Column family oriented NoSQL database
b) Relational Database
c) Document oriented NoSQL database
d) Not part of Hadoop eco system

Answer: Column family oriented NoSQL database

Question.
Which of the following is a Table Type in Hive ?

a)Managed Table
b)Local Table
c)Persistent Table
d)Memory Table

Answer: Managed Table

Question.
Which of the following is a demon process in Hadoop?

a) NameNode
b) JobNode
c) taskNode
d) mapreducer

Answer: NameNode

Question.
Information about locations of the blocks of a file is stored at ________

a)data nodes
b)name node
c)secondary name node
d)job tracker

Answer: name node

Question.
Apache Sqoop is used to

a) Move data from local file system to HDFS
b) Move data from streaming sources to HDFS
c) Move data from RDBMS to HDFS
d) Move data between Hadoop Clusters

Answer: Move data from RDBMS to HDFS

Question.
In a Map Reduce Program, role of combiner is

a) To combine output from multiple map tasks
b) To combine output from multiple reduce tasks
c) To merge data and create a single output file
d) To aggregate the output of each map task

Answer: To aggregate the output of each map task

Question.
Hive External tables store data in

a) default Hive warehouse location in HDFS
b) default Hive warehouse location in Local file system
c) a custom location in HDFS
d) a custom location in local file system

Answer: a custom location in HDFS

Question.
MapReduce programming model is ________

a)Platform Dependent but not language-specific
b)Neither platform- nor language-specific
c)Platform independent but language-specific
d)Platform Dependent and language-specific

Answer: Neither platform- nor language-specific

Question.
Hive generates results using

a) DAG of Map Reduce Jobs
b) sequencial processing of files
c) MySQL query engine
d) List processing

Answer: DAG of Map Reduce Jobs

Question.
Clients access the blocks directly from ________for read and write

a)data nodes
b)name node
c)secondarynamenode
d)primary node

Answer: data nodes

Question.
In Apache Pig, a Data Bag stores

a) Set of columns
b) set of columns with the same data type
c) set of columns with different data type
d) Set of tuples

Answer: Set of tuples

Question.
You can execute a Pig Script in local mode using the following command

a) pig -mode local
b) pig -x local
c) pig -run local
d) pig -f

Answer: pig -x local

Question.
Default block size in HDFS is____________

a)128 KB
b)64 KB
c)32 MB
d)128MB

Answer:128MB

Question.
Apache Flume is used to

a) Move data from RDBMS to HDFS
b) Move data from HDFS to RDBMS
c) Move data from One HDFS Cluster to another
d) Move data from Streaming source to HDFS

Answer: Move data from Streaming source to HDFS

Question.
Default data field delimiter used by Hive is

a) Ctrl-a character
b) Tab
c) Ctrl-b character
d) Space

Answer: Ctrl-a character

Question.
What are the characteristics of Big Data?

a)volume, quality, variety
b)volume,velocity, variety
c)volume, quality, quantity
d)quantity and quality only

Answer: volume,velocity, variety

Question.
Which is optional in map reduce program?

a)Mapper
b)Reducer
c)both are optional
d)both are mandatory

Answer: Reducer

Question.
In Hive tables, each table partition's data is stored as?

a) files in separate folders
b) multiple files in same folder
c) a single file
d) multiple xml files

Answer: files in separate folders

Question.
What is the default storage class in Pig Called ?

a)TextStorage
b)DefaultStorage
c)PigStorage
d)BinaryStorage

Answer: PigStorage

Informatica PowerCenter - MCQS


Question.
In a Workflow , you need to run an operating system script between two sessions. How can you best accomplish this?
a)Call from Post SQL
b)Call the script from a Command task
c)Use a custom transformation
d)Use Event raise and Event wait tasks

Answer: Call the script from a Command task

Question.
An Active Transformation :

a)Does not change the number of rows that pass through it
b)Represents the data flow between sources and targets
c)Can change the number of rows that pass through it
d)Creates a target definition based on a source definition

Answer: Can change the number of rows that pass through it

Question.
NetSal = basic + hra. In which transformation can we achieve this?

Lookup
Expression
Filter
Aggregator

Answer: Expression

Question.
How many return ports are allowed in unconnected lookup transformation ?

3
2
1
Any number

Answer: 1

Question.
Which one is not correct about filter transformation?

Act as a 'where' condition
Can't pass multiple conditions
Act like 'Case' in pl/sql
If one record does not match condition, the record is blocked

Answer: Act as a 'where' condition

Question.
Are user-defined events supported in PowerCenter workflows?

a)No, because only File Wait events are supported in the Event-Wait task.
b)Yes, but only in workflows containing worklets.
c)No, because Event-raise tasks do not support user-defined events.
d)Yes, using a combination of Event-Raise and Event-Wait tasks

Answer: Yes, using a combination of Event-Raise and Event-Wait tasks

Question.
Every mapping must contain which of the following components :

a)Source definition
b)Target definition
c)Source definition, Target definition, Transformation, Links
d)Source and Target definition

Answer: Source definition, Target definition, Transformation, Links

Question.
Which transformation is used to combine data from different sources?

Lookup
Joiner
Union
Expression

Answer: Union

Question.
Aggregator Transformation performs :

a)calculation of values in a single row
b)concatenation of columns/ports in a row
c)aggregate calculations, such as averages and sums
d)calculations on a row-by-row basis

Answer: aggregate calculations, such as averages and sums

Question.
For trimming leading & trailing spaces of a value in Informatica, the following function is used

a)TRIM(VALUE)
b)LTRIM(VALUE)
c)LTRIM(RTRIM(VALUE))
d)AllTRIM(VALUE)

Answer: LTRIM(RTRIM(VALUE))

Question.
In Joiner Transformation, if Master Outer Join type is used, it will :

a)Keep all rows of data from the detail source and the matching rows from the master source, and discards the unmatched rows from the master source
b)Discard all rows of data from the master and detail source that do not match, based on the condition
c)Keep all rows of data from the master source and the matching rows from the detail source, and discards the unmatched rows from the detail source
d)Keep all rows of data from both the master and detail sources

Answer: Keep all rows of data from the detail source and the matching rows from the master source, and discards the unmatched rows from the master source

Question.
Chronological details of workflow tasks can be viewed in

a)Gantt chart view
b)task view
c)Workflow view
d)we can't view chronological details in Workflow Monitor

Answer: Gantt chart view

Question.
Which transformation only works on relational source?

Lookup
Joiner
Union
SQL

Answer: SQL

Question.
A transformation that does not change the number of rows that pass through it, is called :

a)Passive Transformation
b)Connected Transformation
c)Mapplet
d)Source Qualifier

Answer: Passive Transformation

Question.
Which command is used to execute workflow tasks from the command line?

REPCMD
PMCMD
CMD
PMREP

Answer: PMCMD

Question.
Which of the following transformations does not have variable port ?

Filter
Expression
Rank
Aggregator

Answer: Filter

Question.
A set of instructions to execute tasks such as sessions, emails, shell commands etc is called :

a)Mapping
b)Load Balancer
c)Workflow
d)Transformation

Answer: Workflow

Question.
Which of the following is a type of workflow task?

Ranking
Database Bulk Loading
sorting
Event Raise

Answer: Event Raise

Question.
Which of the following is not an Active Transformation?

Router
Filter
Sequence Generator
Update Strategy

Answer: Sequence Generator

Question.
How to execute a PL/SQL script from an Informatica mapping?

Lookup
Stored Procedure
Expression
None of these
Answer: Stored Procedure

Question.
What is a mapplet?

A combination of reusable transformations
A set of transformations that can be reused
None of these

Answer: A set of transformations that can be reused

Question.
Which one is not a type of  fact?

Semi-additive
Additive
Conformed fact
Non-additive

Answer: Conformed fact

Question.
Which one is not a type of dimension ?

Conformed dimension
Rapidly changing dimension
Junk dimension
Degenerated dimension

Answer: Rapidly changing dimension

Question.
What does reusable transformation mean?

It can be re-used across repositories
It can only be used in a mapplet.
It can be used in multiple mappings only once
It can be used in multiple mappings multiple times

Answer: It can be used in multiple mappings multiple times

Question.
Which one is not an option in update strategy?

dd_reject
4
2
dd_delete

Answer: 4 

July 23, 2018

Brief about IRCTC Proposed System - OOAD

Topic: IRCTC
Subject: OOAD
Task: Assignment
Subject: Object Oriented Analysis and Design
Prepared by: Srinivas

REQUIREMENT ANALYSIS

Objective and scope of the Project:
   
The objective of the Project
In order to overcome the drawbacks of the current IRCTC website, we have tried to offer passengers an improved facility for booking tickets that is not provided by IRCTC.

The scope of the Project
  • To understand the current system and implement the software with the current system.
  • To execute the software without any problems, errors, or complications.
  • Automating the system.
PROPOSED SYSTEM
  • The proposed system is fully computerized, making ticket booking easier and cheaper.
  • It provides the user with more options for traveling.
  • The user can book tickets between the source and the destination by breaking the journey into two halves.
  • The price is calculated on the basis of the end-to-end distance rather than as two different transactions.
  • The second part of the journey should start within 48 hours of the first part of the journey. The ticket for the second part of the trip is invalid without the first part of the trip.
  • Senior citizens are provided with a discount only if they have valid ID proof of their age.
  • The ticket can be printed and downloaded in PDF format.
 UML DIAGRAM:

https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

USE CASE DIAGRAMS:

https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

CLASS DIAGRAMS:


SEQUENCE DIAGRAM:

For Registration:

https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

For the invalid user:
https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

For the Valid user:
https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

For Reservation:
https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

For Cancellation:
https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

ACTIVITY DIAGRAM

For Registration:
https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

For Login:
https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

For Reservation:
https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

For Cancellation:
https://www.waseian.com/2018/07/brief-about-irctc-proposed-system-ooad.html

GRASP PATTERN 
  • GRASP stands for General Responsibility Assignment Software Patterns.
  • A collection of general object-oriented design patterns related to assigning responsibilities to objects.
  • There are nine GRASP patterns, maybe some are already recognizable and some not:
  1. Creator 
  2. Information Expert (or just Expert) 
  3. Low Coupling 
  4. Controller 
  5. High Cohesion 
  6. Polymorphism 
  7. Pure Fabrication
  8. Indirection 
  9. Protected Variations.
CREATOR
•    Following this pattern generally avoids adding coupling to a design.
•    When creation is a complex process or varies depending on an input, you will often want to create the object using a different class, implementing the GoF Concrete Factory or Abstract Factory pattern.
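The Concrete Factory idea mentioned above can be sketched briefly: creation logic that varies with an input is gathered into one factory class, so other classes are not coupled to the concrete types. The class names here are illustrative, not from the assignment.

```python
# Illustrative ticket classes whose creation varies with input.
class ETicket:
    fee = 0

class TatkalTicket:
    fee = 100

class TicketFactory:
    """Single place that knows which concrete ticket class to instantiate."""
    @staticmethod
    def create(kind):
        if kind == "tatkal":
            return TatkalTicket()
        return ETicket()

# Callers depend only on the factory, not on the concrete classes.
ticket = TicketFactory.create("tatkal")
print(type(ticket).__name__)  # TatkalTicket
```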

INFORMATION EXPERT
•    This is a general principle and probably the most used of any GRASP pattern.
•    This is generally key to low coupling and high cohesion, but not always; sometimes it is better to hand data off to another class in order to preserve a larger functional partition and support cohesion.
•    We are talking about the information held by software objects, but if there are no relevant software classes yet, look to the domain model.

LOW COUPLING
•    Higher coupling can lead to:
– More difficulty in understanding
– Changes propagating excessively
– More obstacles to code reuse
•    Low coupling often goes hand in hand with high cohesion
•    Consider this principle with every design decision.
•   The more unstable the class being coupled to, the more concerning the coupling. For example, consider a language's standard library versus a class a colleague defined just a couple of days ago.

CONTROLLER
•   A controller tries to organize the work without doing too much of it itself
•   A simple example of this is that UI objects shouldn't execute business logic; there are other classes for that.
•    The controller in the Model‐View‐Controller (MVC) architecture is effectively the same thing. This, or its variation Model‐View‐Presenter, is frequently used in web applications.
HIGH COHESION
•    Very similar to Low Coupling
– Often related (but not always)
– Should be considered in every design decision.
•    Lower cohesion almost always means harder understanding and maintenance.
•    Low cohesion suggests that more delegation should be used.

POLYMORPHISM
•    With respect to implementation, this usually means the use of a super (parent) class or interface – Coding to an interface is generally preferred and avoids committing to a particular class hierarchy.
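"Coding to an interface" as described above can be sketched as follows: callers depend on an abstract type rather than on any concrete class, so new implementations plug in without changing the caller. The class names are illustrative.

```python
from abc import ABC, abstractmethod

class Payment(ABC):
    """The interface callers code against."""
    @abstractmethod
    def pay(self, amount): ...

class CardPayment(Payment):
    def pay(self, amount):
        return f"card charged {amount}"

class WalletPayment(Payment):
    def pay(self, amount):
        return f"wallet debited {amount}"

def checkout(payment: Payment, amount: int):
    # Works with any Payment implementation; no type checks needed.
    return payment.pay(amount)

print(checkout(CardPayment(), 500))    # card charged 500
print(checkout(WalletPayment(), 500))  # wallet debited 500
```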

PURE FABRICATION
•    In other words, getting class concepts from a good domain model or real‐life objects won’t always work out well!
•   An example of a possible pure fabrication class: PersistentStorage. It may very well not be in the domain model and may not map to a real‐life object, but it can be the answer to achieving our goals of low coupling and high cohesion while still carrying a clear responsibility.

INDIRECTION
•   Frequently an indirection intermediary is also a pure fabrication. The PersistentStorage example could very well be an indirection between a Sale class and the database.
•    The GoF patterns Adapter, Bridge, Facade, Observer, and Mediator all accomplish this.
•    The main benefit is lower coupling.

PROTECTED VARIATIONS
•    The solution "interface" is meant in the general sense, but to implement the solution you often do want to program to an interface (in Java, for example)!
•    Benefits: – Easy to extend functionality at PV points – Lower coupling – Implementations can be updated without affecting clients – Reduces the impact of change
•   Similar to the Open‐Closed Principle and to information hiding (hiding design decisions, not just data)
•    "Novice developers tend toward brittle designs; intermediate developers tend toward overly fancy, flexible, generalized designs (which are never used in any way); expert designers choose with insight."

Note: The above-proposed system/assignment for OOAD was prepared by Srinivas and posted here by me. Give your blessings to him, whoever finds it helpful :)