Top Deloitte Interview Questions and Recruitment Tips for 2021 - IQCode

About Deloitte

Deloitte is a global professional services network with presence in more than 150 countries and territories worldwide. Headquartered in London, England, it is one of the Big Four accounting firms and the world's largest professional services network in terms of revenue and number of professionals. Deloitte provides audit, consulting, financial advisory, risk management, tax, and related services to its clients.

To obtain a job at Deloitte, one needs precision, professionalism, and exceptional competence. Being well-informed, composed, and comfortable with cutting-edge technology during the job interview is key to a successful experience with one of the largest privately held corporations in the world. If a candidate has the desire, talent, and ability to keep up with the latest technology, Deloitte is the place to be. The eligibility requirements for the job are listed below.

Eligibility Criteria

  • B.E./B.Tech in any discipline (CSE/ECE/IT/EEE/TELECOM/M.E/CIVIL, etc.) is required.
  • M.Sc. (Computer Science or Information Technology) and MCA graduates are also eligible to apply.
  • A candidate must have more than 60% marks in the 10th and 12th grade (or must have completed a diploma).
  • A candidate must have a minimum of 60% marks in their graduation program.
  • A maximum interval of one year is allowed after the HSC (12th grade), but not after the SSC (10th grade) or between semesters of graduation.
  • Candidates should not have any outstanding backlogs at the time of the selection process.

If a candidate meets the above requirements, they can apply for the Deloitte placement process, described below.

Deloitte Recruitment Process

Interview Process


The interview process for Deloitte involves multiple rounds of interviews, including both technical and behavioral interviews. The technical interviews will focus on the candidate's knowledge and experience in their field, whereas the behavioral interviews assess the candidate's problem-solving skills, communication skills, and teamwork capabilities. Deloitte may also conduct a case study or a group exercise to evaluate the candidate's critical thinking abilities and their ability to work in a team. Overall, Deloitte's recruitment process is rigorous and competitive, but it is an excellent opportunity for hardworking and talented individuals.

Deloitte Technical Interview Questions for Freshers and Experienced

Stored procedures are reusable blocks of SQL code used to execute a set of statements. Triggers are special types of stored procedures that are executed automatically in response to certain events.

Key differences between stored procedures and triggers:

  • Execution - Stored procedures are called explicitly by users or applications, whereas triggers are executed automatically when certain events occur.
  • Purpose - Stored procedures are designed for managing complex SQL operations, whereas triggers are typically used for auditing and enforcing data integrity.
  • Control - Stored procedures allow more control over the flow of execution, whereas triggers offer less control over when and how they run.
  • Scope - Stored procedures can operate on a wider scope, whereas triggers operate on a narrower scope, typically a single table.
  • Return Values - Stored procedures can return values, whereas triggers cannot.

Understanding SMTP (Simple Mail Transfer Protocol) in Computer Networks

SMTP stands for Simple Mail Transfer Protocol and is a standard communication protocol used to transfer electronic mail from one user to another over the internet or other networks. SMTP works alongside other protocols like POP3 (Post Office Protocol v3) and IMAP (Internet Message Access Protocol) to handle email messages.

SMTP works by sending email messages from a mail client to a mail server, and then relaying them to the recipient's mail server, which can be accessed by the recipient's mail client. SMTP authentication is often used to ensure the security of the email exchange.

Overall, SMTP plays a critical role in facilitating the transfer of email messages across various networks and ensuring that they are delivered accurately and securely.
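To make the flow concrete, the following sketch builds the basic command sequence an SMTP client sends when delivering one message. The host names and addresses are placeholders, and real clients typically use EHLO plus authentication and TLS; this is only the classic minimal dialogue.

```java
import java.util.List;

public class SmtpDialogue {
    // Returns the basic command sequence an SMTP client sends to deliver a message.
    static List<String> commands(String from, String to) {
        return List.of(
            "HELO client.example.com",   // identify the sending host
            "MAIL FROM:<" + from + ">",  // envelope sender
            "RCPT TO:<" + to + ">",      // envelope recipient
            "DATA",                      // message headers and body follow
            ".",                         // a line with a lone dot terminates the body
            "QUIT");                     // close the session
    }

    public static void main(String[] args) {
        commands("alice@example.com", "bob@example.org").forEach(System.out::println);
    }
}
```

Each command is answered by a numeric reply code from the server (for example, 250 for success) before the client proceeds to the next step.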

Differences between CSMA/CD and CSMA/CA in Computer Networks

CSMA/CD (Carrier Sense Multiple Access/Collision Detection)


CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance)

are two different access methods utilized in computer networks.

The main difference between CSMA/CD and CSMA/CA is the way they handle collisions: CSMA/CD detects and recovers from collisions after they occur, while CSMA/CA tries to prevent them from occurring in the first place.

In CSMA/CD, a host monitors the medium while it is transmitting. If two hosts transmit at the same time, a collision occurs and both hosts stop transmitting. Each host then waits a random amount of time (binary exponential backoff) before attempting to transmit again.

In CSMA/CA, on the other hand, hosts listen for carrier signals before transmitting. If a carrier signal is present, the host waits a random amount of time and retries until the medium is free. Because each host defers until the channel appears idle, collisions become much less likely.

In addition, CSMA/CA is used in wireless networks (such as Wi-Fi), whereas CSMA/CD was used in classic shared-medium wired Ethernet. This is because a wireless station generally cannot listen for collisions while it is transmitting, so collisions must be avoided rather than detected.

Overall, while both methods have their advantages and disadvantages, CSMA/CA is typically considered to be more efficient for wireless networks, while CSMA/CD is more efficient for wired networks.
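The random wait mentioned above is usually implemented as binary exponential backoff. The sketch below is illustrative (the class and method names are not from any networking library; the cap of 10 doublings follows classic Ethernet): it computes how many slot times a host waits after its n-th consecutive collision.

```java
import java.util.Random;

public class Backoff {
    // After the n-th consecutive collision, CSMA/CD picks a random wait of
    // k slot times, with k in [0, 2^min(n,10) - 1] (binary exponential backoff).
    static int backoffSlots(int collisions, Random rng) {
        int exp = Math.min(collisions, 10);
        return rng.nextInt(1 << exp); // uniform in 0 .. 2^exp - 1
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int n = 1; n <= 4; n++)
            System.out.println("collision " + n + " -> wait " + backoffSlots(n, rng) + " slots");
    }
}
```

Doubling the range after each collision spreads retries out over time, so repeated collisions between the same hosts become progressively less likely.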

Understanding Tunneling Protocol in Computer Networks

Tunneling is a networking technique that allows data transmission between two networks through a third network. It involves encapsulating data packets from the original network inside a separate packet that can travel across the third network to reach the recipient network. This technique is commonly used in Virtual Private Network (VPN) connections, where encrypted data is sent from one network to another securely, ensuring that the transmission remains private while traversing intermediate networks. Tunneling protocols include Point-to-Point Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol (L2TP), and Secure Shell (SSH) tunneling, among others.

Pros and Cons of Star Topology in Computer Networks

Star topology is commonly used in computer networks for its simplicity and ease of maintenance, but it also has some drawbacks. Here are the pros and cons of star topology in computer networks:

Pros:

  • Easy to install and configure
  • Easier to detect and troubleshoot connection issues than other topologies
  • Allows for high-speed data transfer between devices
  • Centralized management and control of the network

Cons:

  • Dependent on the central hub or switch, which can become a single point of failure
  • Costly, since it requires more cabling than other topologies
  • With a hub at the center, bandwidth is shared among all devices, which can lead to slower speeds as the number of devices increases

Note: It's important to weigh the pros and cons of star topology before implementing it in a network. Other factors, such as the size of the network and the types of devices being used, may also affect the choice of topology.

Discussion of the Physical Layer in the OSI Model for Computer Networks

The Physical Layer is the first layer in the OSI model and is responsible for the transmission and reception of unstructured raw data between devices on a computer network. This layer defines the electrical, mechanical, and procedural specifications for activating, maintaining, and deactivating physical connections for data transmission.

The physical layer is concerned with bit-level transmission and the conversion of the data into electric, optical, or radio wave signals. This layer also manages the physical characteristics of the transmission medium, including the wires, fiber optic cable, or radio frequencies used to transmit the data.

In the IEEE 802.11 (wireless LAN) standards, the physical layer is further divided into two sublayers: the Physical Layer Convergence Procedure (PLCP) sublayer and the Physical Medium Dependent (PMD) sublayer. The PLCP sublayer provides a common interface between the PMD sublayer and the MAC layer above it, while the PMD sublayer transmits the data on the physical medium, in this case radio frequencies.

Overall, the Physical Layer plays an important role in ensuring that data is accurately transmitted over a computer network.

Comparison of Hash Join, Sort Merge Join, and Nested Loop Join in DBMS

In DBMS, there are several join methods that can be used to combine tables. Three popular ones are hash join, sort merge join, and nested loop join.

Hash join involves creating a hash table from one table and using it to match rows from another table. This method requires a lot of memory, but can be very fast for large tables.

Sort merge join, on the other hand, involves sorting both tables by a join key and then merging them together. This method is slower than hash join, but requires less memory and can handle larger tables.

Nested loop join is the simplest join method and involves looping through each row in one table and comparing it to every row in the other table. This method is the slowest and is typically only used for small tables.

In summary, hash join is fast but requires a lot of memory, sort merge join is slower but requires less memory and can handle larger tables, and nested loop join is the simplest but slowest method and is only suitable for small tables.
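As a rough illustration, the sketch below joins two tiny integer "tables" on equality, once with a hash join and once with a nested loop join. The class and method names are illustrative only; a real DBMS operates on rows and picks a join method via its cost-based optimizer.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Joins {
    // Hash join: build a hash table on one input, then probe it with the other.
    static List<int[]> hashJoin(int[] left, int[] right) {
        Map<Integer, List<Integer>> built = new HashMap<>();
        for (int l : left) built.computeIfAbsent(l, k -> new ArrayList<>()).add(l);
        List<int[]> out = new ArrayList<>();
        for (int r : right)
            for (int l : built.getOrDefault(r, List.of()))
                out.add(new int[]{l, r});
        return out;
    }

    // Nested loop join: compare every pair -- O(n*m), only sensible for tiny inputs.
    static List<int[]> nestedLoopJoin(int[] left, int[] right) {
        List<int[]> out = new ArrayList<>();
        for (int l : left)
            for (int r : right)
                if (l == r) out.add(new int[]{l, r});
        return out;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3}, b = {2, 3, 4};
        System.out.println(hashJoin(a, b).size());       // 2 matching pairs
        System.out.println(nestedLoopJoin(a, b).size()); // same result, more comparisons
    }
}
```

Both methods return the same matches; the difference is cost: the hash join does one pass over each input (plus the memory for the hash table), while the nested loop does n*m comparisons.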

Difference between Hierarchical and Network Database Models in DBMS

In a hierarchical database model, data is structured in a tree-like structure where each record or node has only one parent. This means that one child can have only one parent but one parent can have multiple children. This model is suited for data with a fixed schema and simple relationships between data.

On the other hand, the network database model allows records to have multiple parents. This means that each record can have multiple owners and can be accessed from multiple paths. This model is suited for complex data with many-to-many relationships between data.

In essence, the hierarchical model is simple and easy to understand but lacks flexibility, while the network model is more complex but offers more flexibility and is suited for complex data.
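The structural difference can be sketched in a few lines (the record types below are illustrative, not any DBMS API): a hierarchical record stores a single parent pointer, while a network record stores a list of owners.

```java
import java.util.List;

public class DbModels {
    // Hierarchical model: every record has at most one parent (a tree).
    static class HierRecord {
        final HierRecord parent;
        HierRecord(HierRecord parent) { this.parent = parent; }
    }

    // Network model: a record may belong to several owners (a graph).
    static class NetRecord {
        final List<NetRecord> parents;
        NetRecord(List<NetRecord> parents) { this.parents = parents; }
    }

    // A child record shared by two parents -- legal in the network model,
    // impossible to express in the hierarchical one.
    static int sharedParentCount() {
        NetRecord order1 = new NetRecord(List.of());
        NetRecord order2 = new NetRecord(List.of());
        NetRecord sharedPart = new NetRecord(List.of(order1, order2));
        return sharedPart.parents.size();
    }

    public static void main(String[] args) {
        HierRecord root = new HierRecord(null);
        HierRecord child = new HierRecord(root);     // exactly one parent
        System.out.println(sharedParentCount());     // 2 parents in the network model
    }
}
```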

Explanation of DDL, DML, and DCL Statements in SQL

DDL (Data Definition Language):

These SQL statements are used for defining the database schema or structure, tables, and other database objects. Examples include CREATE, ALTER, DROP, and TRUNCATE.

DML (Data Manipulation Language):

The DML SQL statements allow you to manipulate data in the database. Examples include SELECT, INSERT, UPDATE, and DELETE.

DCL (Data Control Language):

These SQL statements are responsible for managing access to the database by controlling user permissions and privileges. Examples include GRANT and REVOKE.

It is essential to have a good understanding of these three types of SQL statements to work with databases effectively.

Understanding the Concept of Kernel in Operating Systems

The kernel is the central component of an operating system that manages computer resources and provides services to system calls. It serves as an intermediary between the hardware and the user-level software.

The kernel is responsible for managing tasks such as memory management, file system handling, process management, and implementing security mechanisms. It contains essential code and data structures that allow the operating system to function correctly.

Kernel designs vary among operating systems, and they can be classified into monolithic kernels, microkernels, hybrid kernels, and exokernels. Each kernel type has its advantages and disadvantages depending on the specific requirements of the operating system.

Overall, the kernel is a critical component of the operating system that allows the system to run efficiently, provide useful services, and enhance security.

// Example C program whose input/output is serviced by kernel system calls
#include <stdio.h>

int main() {
    char buffer[500];
    printf("Enter a message: ");
    fgets(buffer, 500, stdin);
    printf("You entered: %s", buffer);
    return 0;
}

The example shows a program whose input and output pass through the kernel. Library functions such as fgets() and printf() ultimately issue system calls (such as read() and write()) that the kernel services, ensuring the I/O operations are carried out correctly.

Advantages and Disadvantages of Using Threads in the Context of an Operating System

Threads offer both advantages and disadvantages when used in the context of an operating system.


Advantages:

1. Increased responsiveness: With multiple threads, an application can respond to user input while performing other tasks in the background.

2. Resource sharing: Threads can share data and resources with each other, making it easier to collaborate on complex tasks.

3. Efficient use of resources: Threads are lighter weight than processes, so they can be created and destroyed more quickly. This makes them a more efficient use of system resources.

4. Improved performance: By breaking a task down into smaller units that can be executed concurrently, threads can improve overall system performance.


Disadvantages:

1. Complexity: Multithreaded applications are more complex to design and debug than single-threaded applications, as they require careful synchronization and coordination among threads.

2. Race conditions: If multiple threads access shared data simultaneously without proper synchronization, race conditions can occur, leading to unpredictable behavior.

3. Deadlocks: When multiple threads are waiting on each other to release resources, deadlocks can occur, causing the application to hang.

4. Overhead: Thread management requires additional overhead, including memory and CPU resources, which can impact system performance.

public class ThreadExample {
   public static void main(String[] args) throws InterruptedException {
      // spawn a worker thread; the main thread remains free to do other work
      Thread worker = new Thread(() -> System.out.println("Background task running"));
      worker.start();
      System.out.println("Main thread stays responsive");
      worker.join(); // wait for the worker to finish
   }
}
The use of threads in the context of an operating system has its own set of advantages and disadvantages. While it can increase responsiveness, facilitate resource sharing, and improve overall system performance, it can also lead to increased complexity, potential race conditions, deadlocks, and additional overhead, which can impact system performance. It is important to consider these factors when designing and implementing multithreaded applications in an operating system context.

Reference Counting and Memory Allocation in the Context of the Operating System

In general, reference counting can manage objects that are memory allocated in the context of the operating system by keeping track of the number of references to each object. However, there are certain situations where reference counting may fail to reclaim objects.

For example, if two objects reference each other and no other references exist, the reference count for each object will never reach zero, causing a memory leak. Additionally, if there is a cycle of objects referencing each other, reference counting will not be able to reclaim any of the objects involved in the cycle.

In such cases, other memory management strategies, such as garbage collection, may be necessary to avoid memory leaks and ensure efficient use of memory.
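The cycle problem can be sketched with a hand-rolled reference counter. This is purely illustrative (Java itself uses a tracing garbage collector precisely so that cycles like this are still reclaimed):

```java
public class RefCountDemo {
    // Minimal hand-rolled reference counting for illustration only.
    static class Obj {
        int refCount;
        Obj ref; // one outgoing reference
        void retain() { refCount++; }
    }

    // Builds two objects that reference each other and returns their counts
    // after all external references are conceptually dropped.
    static int[] cyclicCounts() {
        Obj a = new Obj(), b = new Obj();
        a.ref = b; b.retain(); // a -> b
        b.ref = a; a.retain(); // b -> a  (a reference cycle)
        // With no external references left, neither count can ever reach zero,
        // so a pure reference-counting collector would leak both objects.
        return new int[]{a.refCount, b.refCount};
    }

    public static void main(String[] args) {
        int[] counts = cyclicCounts();
        System.out.println(counts[0] + " " + counts[1]); // both counts stuck at 1
    }
}
```

A tracing collector avoids this by asking a different question: not "how many references point at this object?" but "is this object reachable from the program's roots?", which a cycle of unreachable objects fails.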

Why does a single serial port use interrupt-driven I/O, while a front-end processor like a terminal concentrator uses polling I/O in the context of OS?

In OS, the choice of using either interrupt-driven I/O or polling I/O depends on the device being used and its characteristics.

A single serial port typically has a low data transfer rate and infrequent data transmission, so using interrupt-driven I/O makes sense. In this case, the device generates an interrupt signal only when it has data to transfer.

On the other hand, a front-end processor like a terminal concentrator has multiple devices connected to it, and each device may have different data transfer rates and transmission frequencies. Moreover, the concentrator must manage all these devices simultaneously. Using polling I/O helps in this scenario because the front-end processor can poll each connected device in a round-robin fashion and check for data transfer readiness, ensuring efficient use of system resources.

Therefore, the selection of interrupt-driven I/O or polling I/O depends on the characteristics of the device and the system's overall design considerations.

Explanation of Transaction Atomicity in the Context of Operating Systems

In the context of operating systems, transaction atomicity refers to the concept of a transaction being completed as a single, indivisible unit, or not at all. It means that if a transaction consists of multiple steps, either all of these steps will be executed successfully, or none of them will be executed at all. This ensures that the system remains in a consistent state, even in the event of failures or errors during the transaction process.

To achieve transaction atomicity, operating systems typically use transaction processing systems that follow the ACID (Atomicity, Consistency, Isolation, Durability) properties. Atomicity is the first property of ACID, and it guarantees that a transaction will either complete successfully or roll back to its previous state if it fails at any step. This ensures that the data is not left in an inconsistent state and prevents corruption or loss of data.

In summary, transaction atomicity is a crucial concept within the context of operating systems as it helps ensure that the system remains consistent and that data is not lost or corrupted during the transaction process.
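A simplified illustration of this all-or-nothing behavior: the hypothetical transfer below snapshots the state up front and rolls back to it if any step fails. (Real systems use undo/redo logs rather than copying the whole state; the names here are illustrative.)

```java
import java.util.HashMap;
import java.util.Map;

public class AtomicTransfer {
    // All-or-nothing transfer: either both updates commit, or neither does.
    static boolean transfer(Map<String, Integer> accounts, String from, String to, int amount) {
        Map<String, Integer> snapshot = new HashMap<>(accounts); // simplified "undo log"
        try {
            accounts.put(from, accounts.get(from) - amount);
            if (accounts.get(from) < 0)
                throw new IllegalStateException("insufficient funds");
            accounts.put(to, accounts.get(to) + amount);
            return true;  // commit: every step succeeded
        } catch (RuntimeException e) {
            accounts.clear();
            accounts.putAll(snapshot); // rollback: restore the previous state
            return false;
        }
    }

    static Map<String, Integer> demo() {
        Map<String, Integer> accounts = new HashMap<>(Map.of("A", 100, "B", 50));
        transfer(accounts, "A", "B", 70);  // succeeds: A=30, B=120
        transfer(accounts, "A", "B", 999); // fails mid-way and is rolled back
        return accounts;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // A=30, B=120 -- the failed transfer left no trace
    }
}
```

The key point is that the failed transfer does not leave the debit applied without the credit: the observable state is always "before" or "after", never in between.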

Difference between Declaration and Definition in C for Variables and Functions

In C programming, a declaration is a statement that tells the compiler about the name and type of a variable or function. It is used to inform the compiler that a variable or function exists and what its type is, but it does not allocate memory or provide the implementation of the variable or function.

On the other hand, a definition is a statement that provides the implementation or the value of a variable or function, along with its declaration. In case of a variable definition, memory is allocated to the variable, while in case of a function definition, the function body is written to provide its implementation.

Therefore, while a declaration is necessary to inform the compiler about the type and name of a variable or function, a definition is required to provide its implementation or allocate memory for it.

Different Storage Classes in C

In C, there are four different storage classes:

  • auto: This is the default storage class for all local variables. It is automatically assigned to all variables declared within a function block.
  • register: This storage class suggests that local variables be stored in CPU registers instead of RAM, in order to speed up access.
  • static: The static storage class is used to declare local variables that retain their value across different function calls.
  • extern: The extern storage class is used to give a reference to a global variable that is visible to all program files. It is typically used when there are multiple files and some variables need to be shared across them.

Explaining Generational Garbage Collection in Java and its Popularity

Generational Garbage Collection is a technique used in Java to automatically manage memory utilization by identifying and discarding objects that are no longer needed by the program. It works by dividing the heap memory into different generations based on the lifespan of objects.

The young generation holds newly created objects, while the old generation holds objects that have survived multiple GC cycles. The permanent generation (replaced by Metaspace since Java 8) held metadata about the program, such as class definitions.

This approach is popular because it significantly reduces the workload of the garbage collector, making garbage collection more efficient. By focusing on the most recently created objects, the young generation can be collected more frequently without affecting the performance of the program.

In contrast, the old generation is only collected when necessary, since objects that survive multiple cycles are less likely to become garbage. This results in a more balanced workload for the GC, and helps ensure consistent program performance.

Overall, Generational Garbage Collection is a key feature of Java that helps manage memory more efficiently and improves the performance of Java applications.

Access Specifiers in C++

In object-oriented programming, access specifiers are keywords used to define the scope and accessibility of class members (variables and functions).

C++ has three access specifiers:

  • Public: Public members can be accessed from anywhere in the program.
  • Private: Private members can only be accessed within the same class. They are not accessible from outside the class.
  • Protected: Protected members can be accessed within the same class and its subclasses (also known as child or derived classes). They are not accessible from outside the class hierarchy.

Access specifiers help to enforce encapsulation and maintain the integrity of the program by preventing unintended access or modification of class members.

Reverse K Elements in a Queue

Given an integer k and a queue of integers, reverse the order of the first k elements of the queue, leaving the other elements in the same relative order. The implementation should only use standard queue operations like Enqueue, Dequeue, Size, and Front.

#include <queue>
#include <stack>
using namespace std;

queue<int> reverseKElements(queue<int> q, int k) {
    if (q.empty() || k <= 0 || k > (int)q.size()) return q;
    stack<int> st;
    // move the first k elements into a stack (this reverses their order)
    for (int i = 0; i < k; i++) { st.push(q.front()); q.pop(); }
    // put the reversed elements back at the front of the queue
    while (!st.empty()) { q.push(st.top()); st.pop(); }
    // rotate the remaining n - k elements behind them
    int remaining = q.size() - k;
    for (int i = 0; i < remaining; i++) { q.push(q.front()); q.pop(); }
    return q;
}

Finding the Row with the Most 1s in a 2D Boolean Array

/**
 * Returns the index of the row containing the most 1s in a 2D boolean
 * array whose rows are sorted so that all 1s precede all 0s.
 * @param boolArr The boolean array to be searched.
 * @return The row number with the most 1s in the array.
 */
public int findRowWithMostOnes(boolean[][] boolArr) {
    int m = boolArr.length;    // number of rows
    int n = boolArr[0].length; // number of columns
    int maxOnesRow = 0;        // row number with the most 1s
    int maxOnesCount = 0;      // count of 1s in that row
    // traverse the array row by row
    for (int i = 0; i < m; i++) {
        int onesCount = 0; // count of 1s in the current row
        for (int j = 0; j < n; j++) {
            if (boolArr[i][j]) {
                onesCount++;
            } else {
                break; // rows are sorted, so we can stop at the first 0
            }
        }
        // update the running maximum if this row has more 1s
        if (onesCount > maxOnesCount) {
            maxOnesCount = onesCount;
            maxOnesRow = i;
        }
    }
    return maxOnesRow;
}

Understanding Sessions in PHP

In PHP, a session refers to an interaction between a server and a client that occurs over a period of time. This interaction involves the exchange of data between the two parties, with the server storing relevant data in a session variable that is accessible across multiple pages of a website.

Sessions are commonly used in web development to maintain user-specific data, such as login information, shopping cart contents, and preferences. A session is initiated when a visitor first accesses a website and ends when the user closes the browser or remains inactive for a specified period of time.

During a session, the server assigns a unique session ID to the user, which is used to track the user's activity throughout the website. This ID is stored as a cookie on the user's browser and is used to identify the user's specific session.

PHP provides built-in functions for managing sessions, such as session_start(), which initiates a new session or resumes an existing one, and session_destroy(), which deletes the current session data and ends the session. To access session data, the $_SESSION superglobal array can be used, which stores all session variables.

Overall, sessions are a powerful tool in PHP development, providing a convenient way to store and retrieve user-specific data across multiple pages.

Differences Between echo and print in PHP

In PHP, both `echo` and `print` are language constructs used to display output. However, there are some differences between the two:

  1. Arguments: `echo` can take multiple comma-separated arguments, whereas `print` accepts only one. Both can be used with or without parentheses, since they are language constructs rather than functions.

  2. Return Value: `echo` does not return a value, whereas `print` always returns 1, so it can be used in expressions.

  3. Speed: `echo` is marginally faster than `print` because it does not return a value.

  4. Usage: `echo` is the more common choice for simple output, while `print` is occasionally preferred where a return value is needed in an expression.

// example usage of echo and print

$name = "John";

// using echo
echo "Hello, " . $name; // Hello, John

// using print
print("Welcome, " . $name); // Welcome, John

Man-in-the-Middle Attack in the Context of Cybersecurity

A Man-in-the-Middle (MITM) attack is a type of cyber attack where the attacker intercepts communication between two parties and secretly alters or steals sensitive information. The attacker does this by positioning themselves between the two parties such that each party believes that they are communicating directly with each other, when in fact, all communication is being intercepted by the attacker. This type of attack can result in compromised data integrity, confidentiality, and availability. MITM attacks are commonly used to steal login credentials, financial information, and to perform other malicious actions. It is important to use secure communication channels and to keep software up to date in order to prevent and detect MITM attacks.

Discussing Honeypots in the Context of Cyber Security

Honeypots are decoy computer systems that are designed to attract and trap attackers attempting to infiltrate a network. They can be used for various purposes, including detecting and analyzing ongoing attacks, gaining insight into the attacker's methods, and mitigating the damage done by attackers.

Since honeypots are not legitimate systems, any traffic or activity directed towards them is inherently suspicious. This makes it easier to identify and investigate malicious activity, which can help defenders to protect their actual systems.

However, honeypots must be set up properly and managed carefully to be effective. They must be isolated from the rest of the network and configured to mimic a real system convincingly enough to fool attackers. Additionally, honeypots should be monitored closely to detect any attempts to compromise them and to ensure that they are not used to launch attacks on other systems.

Overall, honeypots can be a valuable tool in a cyber security professional's arsenal, but they must be used judiciously and with caution.

// Illustrative honeypot setup (pseudocode -- Database and its methods are hypothetical)

// Creating a honeypot database
Database honeypot = new Database("honeypot");

// Configuring the honeypot to mimic a vulnerable system
honeypot.exposeService("ftp", "vsftpd 2.3.4"); // hypothetical: advertise an outdated banner

// Isolating the honeypot from the rest of the network
honeypot.setFirewallRules("block all incoming traffic except for specific ports");

// Setting up monitoring and alerting for honeypot activity
honeypot.onAccess(event -> alertSecurityTeam(event)); // hypothetical alert hook

What is YARN in Hadoop?

YARN stands for Yet Another Resource Negotiator. It is one of the key features of Hadoop 2.0 that helps in job scheduling and cluster resource management. YARN allows Hadoop to support more varied processing approaches and data access methods. YARN separates the job scheduling and resource management functionality previously handled by a single daemon, called JobTracker, in Hadoop 1.x. This separation improves cluster utilization and scalability. YARN enables Hadoop to run non-MapReduce distributed computing models, such as Spark and Tez, providing a more flexible and efficient way of processing Big Data.

// Sample code demonstrating YARN usage in Hadoop
// Connect to YARN cluster
Configuration conf = new Configuration();
conf.set("yarn.resourcemanager.address", "resource-manager:8032");
conf.set("yarn.resourcemanager.scheduler.address", "resource-manager:8030");
conf.set("yarn.resourcemanager.resource-tracker.address", "resource-manager:8031");
conf.set("fs.defaultFS", "hdfs://name-node:8020");
conf.set("mapreduce.framework.name", "yarn");

// Submit a job to YARN
Job job = Job.getInstance(conf, "myJob");
FileInputFormat.addInputPath(job, new Path("/input"));
FileOutputFormat.setOutputPath(job, new Path("/output"));
System.exit(job.waitForCompletion(true) ? 0 : 1);

Understanding Heartbeat in Hadoop

In Hadoop, Heartbeat is a signal message transmitted by nodes in the cluster to report their current status to the NameNode or ResourceManager. The heartbeat message includes information such as the node's health status, available disk space, and the number of currently running containers or tasks.

The purpose of the heartbeat is to enable the NameNode or ResourceManager to keep track of node availability and health status, and to make appropriate decisions about data replication, task allocation, and job scheduling. The nodes in the cluster send a heartbeat to the NameNode or ResourceManager every few seconds, and if the NameNode or ResourceManager doesn't receive a heartbeat from a node within a certain period of time, it assumes that the node is unavailable or has failed, and initiates the necessary recovery actions.

In summary, the heartbeat is a critical mechanism that enables the NameNode and ResourceManager to monitor and manage the cluster effectively, and it plays a crucial role in ensuring the robustness and reliability of the Hadoop ecosystem.

// Illustrative sketch of a node sending a heartbeat (not actual Hadoop source;
// NodeStatus, Connection, and the helper methods are hypothetical)

public void sendHeartbeat() {
  // gather the node's current status (health, disk space, running containers)
  NodeStatus status = getNodeStatus();
  try {
    // send the heartbeat message to the NameNode or ResourceManager
    Connection conn = getConnectionToResourceMgr();
    conn.send(status);
  } catch (IOException e) {
    // handle communication error, e.g. retry or mark the master as unreachable
  }
}

The Role of Distributed Cache in Apache Hadoop

In Apache Hadoop, the Distributed Cache plays a crucial role in enhancing the efficiency of MapReduce jobs. The Distributed Cache is used to cache files needed by MapReduce jobs on the nodes where tasks are being run. This results in reduced network traffic and faster execution speeds for MapReduce jobs.

When a MapReduce job is submitted to Apache Hadoop, the job configuration specifies the files that need to be cached using the Distributed Cache. These are typically small read-only side files, such as lookup tables, configuration files, or libraries required by the Map or Reduce tasks (the main input data is read directly from HDFS instead).

The Distributed Cache stores these files in the local file systems of the nodes where the tasks are being run. This ensures that the files are easily accessible by the Map and Reduce tasks, without the need to transfer them over the network.

Overall, the Distributed Cache in Apache Hadoop plays a crucial role in improving the performance of MapReduce jobs by reducing network traffic and enabling faster access to required files.

Preparing for a Deloitte Interview

When preparing for a Deloitte interview, there are several tips to keep in mind:

  • Research the company and the position you are applying for.
  • Practice your responses to typical interview questions.
  • Dress appropriately and arrive on time.
  • Be confident and personable during the interview.
  • Ask questions about the company and the position.

Frequently Asked Questions

Are Deloitte interviews difficult?

Where do you see yourself in 5 years?

This is a common interview question, and it's important to have a clear idea of where you want to be in the future. In five years, I hope to have gained enough experience and developed my skills to take on a leadership role within my company or industry. I also hope to have achieved some personal goals such as completing a certification or advancing my education.

Salary Expectations

Could you please let me know what salary range you have in mind for this position?

Starting Salary for Freshers at Deloitte

In general, the starting salary for freshers at Deloitte varies depending on the role and location. However, according to Glassdoor, the average base salary for a Deloitte analyst is $68,000 per year. It is important to note that this can vary based on factors such as education level, previous experience, and performance during the interview process. Additionally, Deloitte offers various benefits such as health insurance, 401(k) contributions, and paid time off.

Why should Deloitte hire you?

Deloitte should hire me because I possess the necessary skills, experience and a strong work ethic that would enable me to thrive in this organization. I have a proven track record of delivering exceptional results in my past roles, which are all aligned with the requirements of the position at Deloitte. I am highly motivated, analytical and have excellent communication skills, which would help me collaborate effectively with team members and clients. I am also passionate about continuously learning and developing my skills to stay up-to-date with the latest industry trends and best practices.

What are your strengths and weaknesses?

When it comes to strengths, I am adept at problem-solving and critical thinking. I am also an effective communicator and work well in a team environment. As for weaknesses, I sometimes struggle with time management and can become too focused on the details of a project. However, I am actively working on improving these areas and have developed strategies to overcome these challenges.

Coping with Stress and Tight Deadlines

When it comes to dealing with stress and tight deadlines, I have a few strategies that have worked well for me in the past. Firstly, I try to stay organized by making a to-do list and prioritizing my tasks based on their level of importance and urgency. This helps me to stay focused and avoid feeling overwhelmed.

Secondly, I make sure to take breaks throughout the day to rest and recharge. This might include taking a short walk outside, doing some deep breathing exercises, or simply taking a few moments to meditate or reflect.

Finally, I try to maintain a positive attitude and perspective, even when things get challenging. I remind myself that setbacks and obstacles are a normal part of any job, and that by staying resilient and determined, I can overcome them and achieve my goals.

In terms of meeting tight deadlines, I believe that careful planning and effective time management are key. By breaking down larger projects into smaller, manageable tasks, and setting clear deadlines for each one, I can ensure that I stay on track and meet my objectives on time.


# Pseudocode summarizing the strategies above for handling stress
def handle_stress():
    # Make a to-do list and prioritize tasks
    organize_tasks()
    # Take breaks to rest and recharge
    take_breaks()
    # Maintain a positive attitude
    stay_positive()

# Pseudocode summarizing the strategies above for meeting tight deadlines
def meet_deadlines():
    # Break down large projects into smaller tasks
    break_down_tasks()
    # Set clear deadlines for each task
    set_deadlines()
    # Effectively manage time
    manage_time()

Find the First Missing Positive Integer

This problem requires finding the smallest positive integer that is not present in the given array.

/**
 * Returns the smallest positive integer that is not present in the given array.
 *
 * @param nums An array of integers.
 * @return The smallest missing positive integer.
 */
public int findFirstMissingPositive(int[] nums) {
    // First, replace all negative numbers and zeros with a value outside [1, n]
    int n = nums.length;
    for (int i = 0; i < n; i++) {
        if (nums[i] <= 0) {
            nums[i] = n + 1;
        }
    }
    // Use the sign of nums[num - 1] to mark that the value num is present
    for (int i = 0; i < n; i++) {
        int num = Math.abs(nums[i]);
        if (num <= n) {
            nums[num - 1] = -Math.abs(nums[num - 1]);
        }
    }
    // The first index still holding a positive value is the missing integer
    for (int i = 0; i < n; i++) {
        if (nums[i] > 0) {
            return i + 1;
        }
    }
    // If all numbers from 1 to n are present, the answer is n + 1
    return n + 1;
}
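For readers more comfortable with Python, the same in-place marking technique can be sketched as a direct port of the Java method above:

```python
def find_first_missing_positive(nums):
    n = len(nums)
    # Replace all non-positive values with a number outside [1, n]
    for i in range(n):
        if nums[i] <= 0:
            nums[i] = n + 1
    # Negate nums[v - 1] to mark that the value v is present
    for i in range(n):
        v = abs(nums[i])
        if v <= n:
            nums[v - 1] = -abs(nums[v - 1])
    # The first index still holding a positive value gives the answer
    for i in range(n):
        if nums[i] > 0:
            return i + 1
    # All values from 1 to n are present
    return n + 1

print(find_first_missing_positive([3, 4, -1, 1]))  # 2
print(find_first_missing_positive([1, 2, 0]))      # 3
```

Both versions run in O(n) time and O(1) extra space, because the input array itself is used as the presence table.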

Anagrams: Solving the Hashing Problem


#include <bits/stdc++.h>
using namespace std;
#define int long long int

// Function to count the frequency of each lowercase character in a string
vector<int> freq(string s) {
    vector<int> ans(26, 0);
    for (size_t i = 0; i < s.size(); i++) {
        ans[s[i] - 'a']++;
    }
    return ans;
}

signed main() {
    int t;
    cin >> t;
    while (t--) {
        string a, b;
        cin >> a >> b;
        // getting frequency vectors for both the strings
        vector<int> f1 = freq(a);
        vector<int> f2 = freq(b);
        bool flag = true;
        // comparing the frequency vectors
        for (int i = 0; i < 26; i++) {
            if (f1[i] != f2[i]) {
                flag = false;
                break;
            }
        }
        if (flag) {
            cout << "YES\n";
        } else {
            cout << "NO\n";
        }
    }
    return 0;
}

This code solves the anagram problem using hashing. It takes input the number of test cases t, two strings a and b for t test cases, and returns YES if the two strings are anagrams of each other, and NO if they are not.

The code uses a frequency vector to count the frequency of each character in the string. It then compares the two frequency vectors to check if they are equal. If they are equal, the two strings are anagrams of each other.

The time complexity of this code is O(t*n) where n is the length of the string. This code can be further optimized by using unordered maps to store the frequency of characters in the string.

Overall, this is an efficient and easy-to-understand solution to the anagram problem using hashing.
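The hash-map optimization mentioned above can be sketched in Python: `collections.Counter` builds the same character-frequency table, so two strings are anagrams exactly when their counters are equal, and this also works for characters beyond lowercase a to z.

```python
from collections import Counter

def is_anagram(a, b):
    # Two strings are anagrams iff their character-frequency maps match
    return Counter(a) == Counter(b)

print(is_anagram("listen", "silent"))  # True
print(is_anagram("hello", "world"))    # False
```

This keeps the O(n) time per test case while removing the fixed-size alphabet assumption of the frequency-vector approach.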

Coin Sum Infinite - Solving with Dynamic Programming

This program solves the Coin Sum Infinite problem using dynamic programming, where each coin denomination may be used an unlimited number of times.


def coin_sum_infinite(coins, target_sum):
    MOD = 1000007
    dp = [0] * (target_sum + 1)
    dp[0] = 1
    # Iterate through all the coins
    for coin in coins:
        # Update dp array for all possible sums
        for i in range(coin, target_sum + 1):
            dp[i] = (dp[i] + dp[i - coin]) % MOD
    # Return the total number of combinations
    return dp[target_sum]

The coin_sum_infinite function takes in a list of coins and a target sum. It returns the total number of unique combinations of coins that can add up to the target sum.

This program takes advantage of Dynamic Programming to optimize the time complexity. It creates a dp array that stores the number of unique combinations for each sum starting from 0 up to the target sum. It then iterates through each coin and updates the dp array for all possible sums using the formula dp[i] = (dp[i] + dp[i - coin]) % MOD. Finally, it returns the total number of unique combinations for the target sum.

The MOD variable keeps the answer within the modulus required by the problem statement and bounds the size of intermediate values.
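As a quick sanity check (the function is repeated here so the snippet runs on its own), the coins [1, 2, 3] can form a sum of 4 in exactly four ways: 1+1+1+1, 1+1+2, 2+2, and 1+3. Because the outer loop runs over coins, each combination is counted once regardless of order.

```python
def coin_sum_infinite(coins, target_sum):
    MOD = 1000007
    dp = [0] * (target_sum + 1)
    dp[0] = 1  # one way to make sum 0: use no coins
    for coin in coins:
        for i in range(coin, target_sum + 1):
            dp[i] = (dp[i] + dp[i - coin]) % MOD
    return dp[target_sum]

print(coin_sum_infinite([1, 2, 3], 4))  # 4
```

If the loops were swapped (sums outside, coins inside), the code would count ordered sequences instead of combinations, which is a common pitfall with this problem.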

Find Nth Fibonacci Number

/**
 * Calculates the nth Fibonacci number using recursion.
 * @param n the position of the desired Fibonacci number
 * @return the nth Fibonacci number
 */
public static int calculateFibonacci(int n) {
    if (n == 0) {
        return 0;
    } else if (n == 1) {
        return 1;
    } else {
        return calculateFibonacci(n - 1) + calculateFibonacci(n - 2);
    }
}

public static void main(String[] args) {
    int n = 10;
    System.out.println("The " + n + "th fibonacci number is " + calculateFibonacci(n));
}

This code calculates the nth fibonacci number using recursion. The fibonacci sequence starts with 0 and 1, and each subsequent number is the sum of the previous two numbers.

In the main method, the integer variable n is set to 10, so the program computes the 10th Fibonacci number. The calculateFibonacci method takes an integer parameter, the desired position in the sequence, and returns the corresponding Fibonacci number.
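The plain recursive version recomputes the same subproblems exponentially many times. A memoized variant (sketched here in Python with functools.lru_cache) computes each value once, bringing the running time down to O(n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    # Each value is computed once and cached, so the recursion is O(n)
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # 55
```

In an interview it is worth mentioning this trade-off explicitly: the naive recursion is the clearest statement of the definition, while memoization (or a simple iterative loop) is what makes it practical for large n.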

Deloitte Interview Questions


To view all interview questions for Deloitte, please follow the provided link: Deloitte Interview Questions.

Technical Interview Guides

Here are guides for technical interviews, categorized from introductory to advanced levels.
