10 Most Common Business Analyst Interview Questions

Reading Time: 4 minutes

Preparing for a Business Analyst Job Interview? Here are a few tips and the most useful and common business analyst interview questions that you might face. 

Before attending an interview for a business analyst position, one should be thorough about their previous experience: the projects handled and the results achieved. The questions asked generally revolve around situational and behavioural acumen, and the interviewer will judge both knowledge and listening skills from the answers presented.

The most common business analyst interview questions are:

 

1. How do you determine whether a requirement is a good requirement?

A good requirement is one that meets the SMART criteria, i.e.,

Specific – The requirement is described precisely and specifically enough to be easily understood

Measurable – The requirement’s success is measurable using a defined set of parameters

Attainable – The resources needed to fulfil the requirement are available

Relevant – The requirement states results that are realistic and relevant to the business need

Timely – The requirement is raised early enough in the project to be acted upon


 

2. List the documents used by a Business Analyst in a project.

The various documents used by a Business Analyst are:

a. FSD – Functional Specification Document

b. Technical Specification Document

c. Business Requirement Document 

d. Use Case Diagram

e. Requirement Traceability Matrix, etc.

 

3. What is the difference between BRD and SRS?

SRS (Software Requirements Specification) – an exhaustive description of the system to be developed, covering the software–user interactions. A BRD (Business Requirements Document), on the other hand, is a formal agreement on the product between the organization and the client.

The key difference is that the BRD captures the high-level business needs and objectives, while the SRS translates them into detailed functional and non-functional requirements for the development and testing teams.

 

4. Name and briefly explain the various diagrams used by a Business Analyst.

Activity Diagram – It is a flow diagram representing the transition from one activity to another, where an activity refers to a specific operation of the system.

Data Flow Diagram – It is a graphical representation of the data flowing into and out of the system, depicting how data is exchanged between the system and external entities.

Use Case Diagram – Also known as a Behavioural diagram, the use case diagram depicts the set of actions performed by the system in interaction with one or more actors (users).

Class Diagram – This diagram depicts the structure of the system by highlighting classes, objects, methods, operations, attributes, etc. It is the building block for detailed modelling used for programming the software.

Entity Relationship Diagram – It is a data modelling technique and a graphical representation of the entities and their relationships. 

Sequence Diagram – It describes the interaction between the objects. 

Collaboration Diagram – It represents the communication flow between objects by displaying the message flow among them.

 

5. Name the different actors in a use case diagram.

Broadly, there are two types of actors in a use-case:

a. Primary Actors – Start the process

b. Secondary Actors – Assist the primary actor

They can further be categorized as:

i. Human

ii. System

iii. Hardware

iv. Timer

 

6. Describe ‘INVEST’.

INVEST stands for Independent, Negotiable, Valuable, Estimable, Sized appropriately, and Testable. Writing requirements or user stories that meet these criteria helps technical teams and project managers deliver quality products and services.

 

7. What is Pareto Analysis?

Also known as the 80/20 rule, Pareto Analysis is an effective decision-making technique for quality control. It is based on the observation that roughly 80% of the effects in a system result from 20% of the causes, hence the name 80/20 rule.
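In practice, the analysis amounts to sorting the causes by their contribution and finding the smallest set that accounts for about 80% of the effects. A minimal sketch in Python, using made-up defect counts purely for illustration:

```python
# Hypothetical defect counts per cause (illustration only)
defects = {"UI bugs": 120, "API errors": 80, "DB timeouts": 30, "Config": 15, "Docs": 5}

total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause}: {count} defects ({cumulative / total:.0%} cumulative)")
    if cumulative / total >= 0.8:
        print("-> roughly 80% of the defects come from the causes listed so far")
        break
```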

 

8. Describe Gap Analysis.

Gap Analysis is used to compare the existing system and its functionalities against the targeted system. The gap refers to the changes and tasks that need to be carried out to reach the targeted system; the analysis compares current performance against the targeted functionalities.

 

9. Name the different types of gaps that could be encountered during Gap Analysis.

There are mainly four types of gaps:

a. Performance Gap – Gap between expected and actual performance

b. Product/ Market Gap – Gap between budgeted and actual sales numbers

c. Profit Gap – Variance between targeted and actual profit

d. Manpower Gap – Gap between required and actual strength and quality of the workforce in the organization

 

10. What are the various techniques used in requirement prioritization?

Requirement prioritization, as the name suggests, is the process of assigning priorities to requirements based on factors such as business urgency, implementation schedule and phase, and cost.

The techniques for requirement prioritization are:

a. Requirements Ranking Method

b. Kano Analysis

c. 100 Dollar Method

d. MoSCoW Technique

e. Five Whys

 

Stay tuned to this page for more such information on interview questions and career assistance. If you are not confident enough yet and want to prepare more to grab your dream job as a Business Analyst, upskill with Great Learning’s PG program in Business Analytics and Business Intelligence, and learn all about Business Analytics along with great career support.

15 Most Common Data Science Interview Questions

Reading Time: 5 minutes

Data Science is a comparatively new concept in the tech world, and it can be overwhelming for professionals to seek career and interview advice while applying for jobs in this domain. There is also a vast range of skills to acquire before setting out to prepare for a data science interview. Interviewers look for practical knowledge of data science basics and industry applications, along with a good knowledge of tools and processes. Here is a list of the 15 most common data science interview questions that might be asked during a job interview. Read along.

 

1. How is Data Science different from Big Data and Data Analytics?

Ans. Data Science utilizes algorithms and tools to draw meaningful and commercially useful insights from raw data. It involves tasks like data modelling, data cleansing, analysis, pre-processing etc. 

Big Data is the enormous set of structured, semi-structured, and unstructured data in its raw form generated through various channels.

And finally, Data Analytics provides operational insights into complex business scenarios. It also helps in predicting upcoming opportunities and threats for an organization to exploit.


2. What is the use of Statistics in Data Science?

Ans. Statistics provides tools and methods to identify patterns and structures in data and to draw deeper insights from it. It plays an important role throughout Data Science, from data acquisition and exploration to analysis and validation.

 

3. What is the importance of Data Cleansing?

Ans. As the name suggests, data cleansing is a process of removing or updating information that is incorrect, incomplete, duplicated, irrelevant, or formatted improperly. It is important because it improves the quality of the data, and hence the accuracy and productivity of processes and of the organization as a whole.

 

4. What is a Linear Regression?

Ans. The linear regression equation is a one-degree equation of the form Y = mX + C and is used when the response variable is continuous in nature, for example height, weight, or the number of hours. It is a simple linear regression when there is one continuous dependent variable and a single independent variable, and a multiple linear regression when there are multiple independent variables.
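As a quick illustration, a simple linear regression can be fitted in a few lines with scikit-learn; the hours-vs-score data below is made up purely for the example:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: hours studied (X) vs. exam score (Y) -- illustration only
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([52, 57, 61, 68, 74])

model = LinearRegression().fit(X, y)
print("slope (m):", model.coef_[0])
print("intercept (C):", model.intercept_)
print("prediction for 6 hours:", model.predict([[6]])[0])
```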

 

5. What is logistic regression?

Ans. In logistic regression, the outcome (the dependent variable) has a limited number of possible values and is categorical in nature, for example yes/no or true/false. The model is based on the logistic (sigmoid) function, Y = e^X / (1 + e^X), equivalently Y = 1 / (1 + e^−X), so the prediction is a probability between 0 and 1.
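A minimal scikit-learn sketch on made-up pass/fail data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: hours studied vs. pass (1) / fail (0) -- illustration only
X = np.array([[1], [2], [3], [4], [5], [6]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)
print("probability of passing after 3.5 hours:", clf.predict_proba([[3.5]])[0, 1])
```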

 

6. Explain Normal Distribution

Ans. Normal Distribution is also called the Gaussian Distribution. It has the following characteristics, which are easy to verify numerically (see the short check after the list):

a. The mean, median, and mode of the distribution coincide

b. The distribution has a bell-shaped curve

c. The total area under the curve is 1

d. Exactly half of the values are to the right of the centre, and the other half to the left of the centre
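A quick check of these properties with NumPy and SciPy, sampling from a standard normal distribution:

```python
import numpy as np
from scipy.stats import norm

samples = np.random.default_rng(0).normal(loc=0, scale=1, size=100_000)
print("mean:  ", samples.mean())          # ~0, coincides with the median and mode
print("median:", np.median(samples))
print("P(X <= centre):", norm.cdf(0))     # 0.5 -> half the values lie on each side
print("total area under the curve:", norm.cdf(np.inf) - norm.cdf(-np.inf))  # 1.0
```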

 

7. Mention some drawbacks of the linear model

Ans. Here are a few drawbacks of the linear model:

a. The assumption regarding the linearity of the errors

b. It is not usable for binary or count outcomes

c. It can’t solve certain overfitting problems

 

8. Which one would you choose for text analysis, R or Python?

Ans. Python would be the better choice for text analysis as it offers the Pandas library, which provides easy-to-use data structures and high-performance data analysis tools. However, depending on the complexity of the data, either language can be used, whichever suits the task best.

 

9. What steps do you follow while making a decision tree?

Ans. The steps involved in making a decision tree are listed below, followed by a short code sketch:

a. Pick up the complete data set as input

b. Identify a split that would maximize the separation of the classes

c. Apply this split to input data

d. Re-apply steps ‘b’ and ‘c’ to the data that has been divided

e. Stop when a stopping criterion is met

f. Clean up the tree by pruning

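A minimal sketch of these steps with scikit-learn, which performs the splitting and pruning internally; the built-in iris dataset is used here only as an example:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)            # step a: complete data set as input
tree = DecisionTreeClassifier(
    criterion="gini",                        # steps b-c: pick and apply the best split
    max_depth=3,                             # stopping criterion (step e)
    ccp_alpha=0.01,                          # cost-complexity pruning (step f)
).fit(X, y)                                  # step d: splits are re-applied recursively

print(export_text(tree, feature_names=load_iris().feature_names))
```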

10. What is Cross-Validation? 

Ans. It is a model validation technique to assess how the outcomes of a statistical analysis will generalize to an independent data set. It is mainly used where prediction is the goal and one needs to estimate how accurately a predictive model will perform in practice.

The goal here is to define a data set for testing the model during its training phase and to limit overfitting and underfitting issues. The validation set and the training set should be drawn from the same distribution to avoid making things worse.
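For example, k-fold cross-validation takes only a couple of lines with scikit-learn (using the built-in iris data as a stand-in):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# 5-fold cross-validation: train on 4 folds, validate on the held-out fold, repeat
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("accuracy per fold:", scores)
print("mean accuracy:", scores.mean())
```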

 

11. Mention the types of biases that can occur during sampling.

Ans. The three types of biases that occur during sampling are:

a. Self-Selection Bias

b. Undercoverage Bias

c. Survivorship Bias

 

12. Explain the Law of Large Numbers

Ans. The ‘Law of Large Numbers’ states that if an experiment is repeated independently a large number of times, the average of the individual results converges to the expected value. Likewise, the sample variance and standard deviation converge towards the corresponding population values as the number of trials grows.
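A quick simulation makes this concrete: the running average of fair-die rolls approaches the expected value of 3.5 as the number of rolls grows.

```python
import numpy as np

rng = np.random.default_rng(42)
rolls = rng.integers(1, 7, size=1_000_000)   # fair six-sided die, expected value 3.5
for n in (10, 1_000, 1_000_000):
    print(f"average of the first {n:>9,} rolls: {rolls[:n].mean():.4f}")
```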

 

13. What is the importance of A/B testing?

Ans. The goal of A/B testing is to pick the better of two variants (hypotheses). Typical use cases include web page or application responsiveness, landing page redesigns, banner testing, and marketing campaign performance.

The first step is to confirm a conversion goal, and then statistical analysis is used to understand which alternative performs better for the given conversion goal.
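For conversion-rate goals, a common choice for that statistical analysis is a two-proportion test; a minimal sketch with SciPy, on made-up conversion counts:

```python
from scipy.stats import chi2_contingency

# Made-up results: [conversions, non-conversions] for variants A and B
table = [[120, 880],    # variant A: 12.0% conversion
         [150, 850]]    # variant B: 15.0% conversion

chi2, p_value, dof, expected = chi2_contingency(table)
print("p-value:", p_value)
if p_value < 0.05:
    print("Statistically significant difference -> prefer the better-converting variant")
else:
    print("No significant difference detected at the 5% level")
```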

 

14. What are over-fitting and under-fitting?

Ans. In the case of over-fitting, a statistical model fails to depict the underlying relationship and describes the random error or noise instead. It occurs when the model is extremely complex, with too many parameters relative to the number of observations. An overfit model has poor predictive performance because it overreacts to minor fluctuations in the training data.

In the case of underfitting, the machine learning algorithm or the statistical model fails to capture the underlying trend in the data. It occurs when trying to fit a linear model to non-linear data. It also has poor predictive performance.

 

15. Explain Eigenvectors and Eigenvalues

Ans. Eigenvectors indicate the directions along which a linear transformation acts by compressing, flipping, or stretching. They are used to understand linear transformations and are generally calculated for a correlation or covariance matrix.

The eigenvalue is the strength of the transformation in the direction of the eigenvector. 
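With NumPy, the eigenvalues and eigenvectors of a covariance (or any square) matrix can be computed directly; the small matrix below is made up for illustration:

```python
import numpy as np

# A small symmetric matrix, e.g. the covariance matrix of two variables
cov = np.array([[2.0, 0.8],
                [0.8, 1.0]])

values, vectors = np.linalg.eig(cov)
for value, vector in zip(values, vectors.T):   # columns of `vectors` are the eigenvectors
    # Each eigenvector keeps its direction under the transformation;
    # the eigenvalue is the factor by which it is stretched.
    print(f"eigenvalue {value:.3f} -> eigenvector {vector}")
    print("  A @ v == value * v:", np.allclose(cov @ vector, value * vector))
```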

 

Stay tuned to this page for more such information on interview questions and career assistance. If you are not confident enough yet and want to prepare more to grab your dream job in the field of Data Science, upskill with Great Learning’s PG program in Data Science Engineering, and learn all about Data Science along with great career support.