Q. Define Software Process.
A software process is the structured set of
activities, methods, and practices followed to develop, deliver, and maintain
software systems. It provides a disciplined approach to planning, building,
testing, and deploying a product. A software process defines what tasks
must be performed, who performs them, and how they should be
executed. Typical stages include requirement analysis, system design,
implementation, testing, deployment, and maintenance. The goal is to produce
high-quality software within time and budget constraints while minimizing
risks. Different models such as Waterfall, Agile, Spiral, and V-Model represent
different ways of organizing these activities. For example, an Agile software
process involves iterative development, continuous feedback, and frequent delivery
of working software increments.
Example: The Waterfall model defines stages like
requirement gathering → design → implementation → testing → deployment.
Q. Define Software Engineering.
Software Engineering is
the systematic and disciplined application of engineering principles to the
design, development, testing, deployment, and maintenance of software. It aims
to produce reliable, efficient, scalable, and cost-effective software systems.
Software engineering emphasizes structured methodologies, proper documentation,
quality assurance, management of complexity, and use of tools such as UML,
version control systems, and testing frameworks. It also involves applying
concepts like modularity, abstraction, reuse, and project management practices.
For example, developing a hospital management system requires requirements
analysis, architectural design, user interface design, coding, testing, and
maintenance. Software engineering ensures this entire process is organized,
predictable, and leads to high-quality results that satisfy user needs and
industry standards.
Example: Using UML diagrams, version control, and testing
frameworks while building a banking application.
Q. What is Software Crisis?
The software
crisis refers to the set of problems that arose in early software
development during the 1960s–70s when systems became more complex, but
development techniques were inadequate. Projects often ran over budget, were
delivered late, or failed to meet user expectations. Many systems were
unreliable or difficult to maintain due to poor design, insufficient testing,
and lack of proper project management. The crisis highlighted issues like low
productivity, poor documentation, difficulty in understanding large codebases,
and lack of standard methodologies. For
example, a government payroll system might fail because requirements
changed and the development team could not manage complexity. The software
crisis led to the development of formal software engineering practices, models,
and tools.
Q. What are system requirements?
Give examples.
System requirements
describe what a software system should accomplish and the constraints under
which it must operate. They guide developers, testers, and stakeholders
throughout the project. System requirements are divided into functional and
non-functional types.
- Functional requirements define system behaviors and features—e.g., “The system shall allow users to reset their password via email verification.”
- Non-functional requirements specify performance, reliability, security, or usability constraints—e.g., “The website shall load within 3 seconds under normal traffic.”
System requirements help
ensure that the final software meets user expectations, supports business goals,
and functions correctly in its intended environment. They are usually
documented in a Software Requirements Specification (SRS).
Q. List any two requirement
elicitation techniques.
Requirement elicitation is the process of gathering
requirements from stakeholders to understand what a system must accomplish. Two
widely used techniques are:
- Interviews: Developers or analysts
conduct one-on-one or group interviews with stakeholders to gather
detailed information about system needs. Interviews allow clarification of
complex requirements and discovery of hidden expectations. For example,
interviewing doctors to gather requirements for a hospital management
system.
- Questionnaires/Surveys: These are useful for
collecting data from a large number of users quickly. They consist of
structured questions that help identify user preferences, priorities, or
feature expectations. For example, sending surveys to customers to gather
requirements for an online banking application.
Q. What is a Use Case?
A use case describes how a user (actor)
interacts with a system to achieve a specific goal. It captures the functional
requirements of a system by outlining the sequence of steps between the user
and the system. Use cases help developers understand user needs, identify
system boundaries, and define required functionality. Each use case includes
actors, preconditions, main flow, alternative flows, and postconditions. For
example, in an e-commerce system, a use case titled “Place Order” describes how
a customer selects items, adds them to the cart, provides payment details, and
completes the purchase. Use cases serve as the foundation for design, testing,
and documentation and are often represented using UML diagrams.
Q. Define Putnam’s Resource
Allocation Model.
Putnam’s Resource Allocation Model, also known as
the SLIM (Software Life-Cycle Management) model, is a mathematical model
used for estimating software project effort, schedule, and staffing levels. It
uses the Rayleigh distribution curve to represent how manpower should be
applied over time. The model suggests that staffing levels start low, peak
during development, and decline toward the end of the project. It also reflects
the relationship between development effort, productivity, and time to deliver
a system. The model helps managers predict the optimal number of developers
needed at each phase, avoiding under- or over-staffing. For example, a large
banking software project can use Putnam’s model to estimate required effort and
schedule based on historical productivity data.
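The Rayleigh staffing curve behind the model can be sketched in a few lines. The figures below (40 person-years of total effort, staffing peaking at year 2) are hypothetical, not from any real project:

```python
import math

def rayleigh_staffing(t: float, K: float, t_d: float) -> float:
    """Staff level at time t, for total effort K (person-years) and
    peak-staffing time t_d (years), per the Rayleigh curve used in
    Putnam's SLIM model: m(t) = (K / t_d^2) * t * exp(-t^2 / (2 t_d^2))."""
    return (K / t_d**2) * t * math.exp(-t**2 / (2 * t_d**2))

# Hypothetical project: 40 person-years of effort, staffing peaks at year 2.
for t in (0.5, 1, 2, 3, 4):
    print(f"year {t}: {rayleigh_staffing(t, 40, 2):.1f} staff")
```

The printed values rise to a peak at t = t_d and then taper off, matching the "start low, peak during development, decline toward the end" shape described above.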
Q. What is Coupling? List any two
types of coupling.
Coupling refers to the degree of interdependence between
software modules. Lower coupling indicates that modules are independent and
interact through minimal, well-defined interfaces, making the system easier to
maintain and modify. Highly coupled modules depend heavily on each other, increasing
complexity, reducing flexibility, and making debugging difficult. In software
engineering, the goal is to achieve low coupling.
Two types of coupling include:
- Data Coupling: Modules communicate by passing simple data values. This is desirable and represents low coupling.
- Control Coupling: One module controls the behavior of another by passing control information such as flags. This is considered undesirable because it increases dependency between modules.
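A minimal sketch of the two types, using hypothetical functions (not from any particular system):

```python
# Data coupling: the caller passes only the simple values the callee needs.
def compute_interest(principal: float, rate: float) -> float:
    return principal * rate

# Control coupling: a flag passed in steers the callee's internal behavior,
# so the caller must know how the callee works inside.
def format_amount(amount: float, as_currency: bool) -> str:
    if as_currency:
        return f"${amount:.2f}"
    return str(amount)
```

Removing the flag (e.g., splitting `format_amount` into two single-purpose functions) would turn the control coupling into plain data coupling.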
Q. Define Cohesion. List any two
types of cohesion.
Cohesion refers to the degree to which elements within a
single module or component are related to each other and work together to
perform a single task. High cohesion is desirable because it improves
readability, maintainability, and reusability. A highly cohesive module has a
clear, focused purpose, while low cohesion indicates that a module performs
unrelated tasks, making the system harder to understand and maintain.
Two types of cohesion include:
- Functional Cohesion: All elements within the module contribute to a single well-defined task. This is the highest form of cohesion.
- Sequential Cohesion: Output from one part of the module serves as input to another part, forming a meaningful sequence of operations.
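Both types can be illustrated with a hypothetical library-fine example (names and rates are illustrative only):

```python
# Functional cohesion: every statement serves exactly one task --
# computing an overdue fine.
def compute_overdue_fine(days_overdue: int, rate_per_day: float) -> float:
    if days_overdue <= 0:
        return 0.0
    return days_overdue * rate_per_day

# Sequential cohesion: the output of step 1 is the input of step 2.
def issue_receipt(days_overdue: int, rate_per_day: float) -> str:
    fine = compute_overdue_fine(days_overdue, rate_per_day)  # step 1 output...
    return f"Fine due: {fine:.2f}"                           # ...feeds step 2
```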
Q. Define Software Measurement.
Software measurement involves quantifying various
characteristics of software processes, products, or projects to evaluate
performance, quality, productivity, and progress. It provides objective data
for decision-making, planning, and process improvement. Software measurement
covers metrics such as Lines of Code (LOC), Function Points, defect density,
test coverage, code complexity, and reliability. These metrics help assess
project size, estimate cost and effort, evaluate maintainability, and track
quality throughout the development lifecycle. For example, measuring the number
of defects found during testing helps determine the stability of a system
before release. Effective software measurement allows organizations to compare
performance across projects, identify bottlenecks, and adopt best practices
based on quantitative analysis.
Q. Differentiate between a failure
and a fault.
A fault (also called a defect or bug) is an
error in the software’s code, design, or logic. It occurs due to mistakes made
by developers during implementation or design. Faults may remain hidden until
executed.
A failure occurs when the software does not
perform as expected during execution due to encountering a fault. Failures are
visible to the user, while faults exist internally in the system.
For example, an incorrect condition in an if-statement is a fault; if
this condition causes the system to crash or produce incorrect output, that
observable problem is the failure. Thus, faults are causes, and failures
are the symptoms triggered during execution.
Q. What is Path Testing?
Path testing is a white-box testing technique that aims to
ensure every possible execution path in a program is tested at least once. It
is based on program control flow and involves analyzing decisions, loops, and
branches within the code. The objective is to detect logical errors and ensure
that all conditional statements behave correctly. Testers create test cases to
cover independent paths derived from the control flow graph. For example, in a
login module with validation conditions such as “empty fields,” “wrong password,”
and “successful login,” path testing ensures that each possible route through
the module is tested. This improves code reliability and helps identify hidden
faults.
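The login example above can be sketched directly. This is a toy module (the hard-coded password is purely illustrative); its control flow graph has three independent paths, so path testing needs at least three test cases:

```python
def login(username: str, password: str) -> str:
    # Three independent paths through the control flow graph:
    if not username or not password:
        return "empty fields"        # path 1
    if password != "secret":
        return "wrong password"      # path 2
    return "successful login"        # path 3

# One test case per independent path:
assert login("", "") == "empty fields"
assert login("alice", "oops") == "wrong password"
assert login("alice", "secret") == "successful login"
```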
Q. Define Software Configuration
Management.
Software Configuration Management (SCM) is the discipline of
systematically controlling, tracking, and managing changes in software
throughout its lifecycle. SCM ensures that versions, builds, and configuration
items are properly maintained, documented, and traceable. It includes activities
such as version control, change management, build management, release
management, and status reporting. SCM tools like Git, SVN, and Jenkins help
teams manage collaborative development, avoid conflicting changes, and maintain
a stable development environment.
For example, SCM ensures that when
multiple developers work on different modules of a banking system, each change
is recorded, reviewed, and merged properly. Ultimately, SCM enhances project
stability, reduces errors due to unmanaged modifications, and supports
efficient maintenance.
Q. What is Spiral Model?
The Spiral Model is a risk-driven software
development model introduced by Barry Boehm. It combines elements of iterative
development with systematic risk analysis. The model consists of repeated
cycles called “spirals,” each containing four phases: planning, risk analysis,
engineering, and customer evaluation. After each cycle, the project is refined
and expanded. The focus on risk assessment helps detect potential failures
early, making it suitable for large, complex, or high-risk projects.
For example, in developing an air-traffic control
system, the Spiral Model ensures periodic evaluation of requirements, design
decisions, and risks, and allows stakeholders to review prototypes before
progressing. This reduces uncertainty and improves the system’s overall
reliability.
Q. Define functional and non-functional requirements with examples.
- Functional requirements describe the specific behaviors, actions, and services a system must provide. They define what the system should do. Example: “The system shall allow users to register using email and password” or “The ATM shall dispense cash when a valid PIN is entered.”
- Non-functional requirements specify the quality attributes and constraints of the system, such as performance, security, usability, and reliability. They define how the system should behave. Example: “The website must load within 2 seconds,” or “The system shall support 500 concurrent users.”
Functional requirements shape system behavior,
while non-functional requirements ensure quality, performance, and user satisfaction.
Both types are documented in an SRS and guide design and testing activities.
Q. What is a Context Diagram?
A context diagram is a high-level graphical
representation of a system that shows the system as a single process and
illustrates how it interacts with external entities such as users, other
systems, or databases. It is part of the Data Flow Diagram (DFD) family and
helps define system boundaries by showing incoming and outgoing data flows.
Context diagrams do not include internal processes; instead, they focus on
external relationships.
For example, in a library management system, the context diagram shows
interactions with “Librarian,” “Student,” and “Book Database,” along with data
flows like book requests or payments. Context diagrams help stakeholders
clearly understand system scope and major data interactions before detailed
design begins.
Q. Define COCOMO Basic Model.
The Basic COCOMO (Constructive Cost Model)
is an early software cost estimation model developed by Barry Boehm. It
estimates the development effort in person-months using a formula based on the
size of the software measured in KLOC (thousands of lines of code). The model
uses the equation:
Effort = a × (KLOC)^b,
where constants a and b depend on the
project type (organic, semi-detached, or embedded).
The Basic COCOMO model helps predict project effort, duration, and staffing
needs in the early stages. For example, estimating the development effort for a
payroll system based on projected source code size. Although simplistic, it
provides a foundational understanding of software cost estimation.
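The standard Basic COCOMO constants (Boehm, 1981) can be applied directly; the 32-KLOC payroll figure below is a hypothetical input:

```python
# Basic COCOMO constants (a, b, c, d) per project type, from Boehm (1981).
# Effort = a * KLOC^b (person-months); Duration = c * Effort^d (months).
COCOMO_CONSTANTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str):
    a, b, c, d = COCOMO_CONSTANTS[mode]
    effort = a * kloc ** b        # person-months
    duration = c * effort ** d    # months
    staff = effort / duration     # average team size
    return effort, duration, staff

# Hypothetical 32-KLOC organic payroll system:
effort, duration, staff = basic_cocomo(32, "organic")
print(f"{effort:.1f} PM, {duration:.1f} months, {staff:.1f} people")
```

For 32 KLOC organic this gives roughly 91 person-months over about 14 months with an average team of 6–7, which is the kind of early ballpark the model is meant to provide.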
Q. What is Object-Oriented
Design?
Object-Oriented Design (OOD) is a design methodology that
models a software system using objects, classes, attributes, and methods. It
focuses on organizing software around real-world entities and their
interactions. OOD uses principles like encapsulation, inheritance, polymorphism,
and abstraction to create modular, reusable, and maintainable systems.
Designers identify classes based on system requirements, define relationships
among them, and specify how objects interact to achieve system functionality.
For example, in an online shopping system,
classes such as Customer, Product, Order, and Payment encapsulate data and
behavior. OOD encourages separation of concerns and improves flexibility,
making systems easier to extend and maintain. UML diagrams such as class,
sequence, and use case diagrams support OOD.
Q. Define Halstead’s Program Length.
Halstead’s Program Length is a metric from
Halstead’s software science that measures the size of a program based on the
number of operators and operands. It is defined as:
Program Length (N) = N₁ + N₂,
where N₁ is the total number of operator
occurrences and N₂ is the total number of operand occurrences. The metric helps
evaluate program complexity, effort, and maintenance requirements. By analyzing
operator and operand usage, Halstead’s metrics can also estimate development
time, difficulty, and potential errors.
For example, if a code segment uses 30
operators and 70 operands, the program length is 100. Larger program lengths
may indicate higher cognitive load on developers and greater potential for
defects.
Q. What is Software Quality
Assurance (SQA)?
Software Quality Assurance (SQA) is a set of activities and
processes that ensure software meets required quality standards throughout its
development lifecycle. SQA involves establishing procedures, conducting
reviews, audits, verification, validation, continuous process monitoring, and
quality measurement. Its goal is to prevent defects rather than detect them
later. SQA ensures compliance with organizational, technical, and industry
standards such as ISO or CMMI.
Activities include code reviews, requirement
reviews, static analysis, test planning, and adherence to development
methodologies. For example, before delivering a healthcare management system,
SQA teams verify that design documents follow standards and development
activities meet quality benchmarks. SQA ultimately improves reliability,
maintainability, and user satisfaction.
Q. What is Decision Table
Testing?
Decision Table Testing is a black-box testing technique
used to test systems with complex business rules or multiple input
combinations. It organizes conditions and corresponding actions into a tabular
format, making it easier to visualize and derive test cases. Decision tables
help testers ensure that all possible combinations of inputs are considered,
especially when there are many rules or dependencies.
For example, in an insurance system, premium
calculation may depend on age, vehicle type, and driving record. A decision
table can list all combinations of these conditions and specify the resulting
premium category. This ensures thorough coverage and helps identify missing or
inconsistent rules in the system’s logic.
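The insurance example can be enumerated mechanically. The rule (premium rises with the count of risk factors) is invented for illustration, not an actual underwriting rule:

```python
from itertools import product

# Hypothetical rule: the premium category depends on how many of the
# three risk conditions hold.
def premium_category(young_driver: bool, heavy_vehicle: bool, bad_record: bool) -> str:
    risk = sum([young_driver, heavy_vehicle, bad_record])
    return {0: "low", 1: "medium", 2: "high", 3: "very high"}[risk]

# The full decision table: 2^3 = 8 rules, one test case per column.
table = {combo: premium_category(*combo)
         for combo in product([False, True], repeat=3)}
for conditions, action in table.items():
    print(conditions, "->", action)
```

Enumerating every column this way is exactly what guards against the "missing or inconsistent rules" the answer mentions.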
Q. Define Change Control in
configuration management.
Change Control is a key component of configuration management
that ensures changes to software artifacts are systematically proposed,
reviewed, approved, implemented, and documented. It prevents unauthorized or
unplanned modifications and maintains system integrity. The change control
process includes submitting a change request, analyzing impacts, approving or
rejecting the change, implementing it, and updating configuration records.
For example, if a new feature is
requested for an online banking system, change control ensures the request is
assessed for feasibility, security impact, and compatibility before
implementation. By following a controlled process, teams avoid introducing
unexpected defects or breaking existing functionality. Change control maintains
stability in multi-developer environments.
Q. What is a Data Dictionary?
A data dictionary is a centralized
repository that contains detailed descriptions of data elements used in a
software system. It includes information such as data types, formats, allowed
values, relationships, constraints, and meaning. The data dictionary helps
developers, analysts, and testers maintain consistency and avoid ambiguity when
dealing with data.
For example, in a student management system, the data dictionary may specify
that “Student_ID” is an integer of length 8, must be unique, and serves as a
primary key. It may also describe fields such as Name, Address, or GPA. Data
dictionaries support database design, documentation, integration, and
validation, ensuring consistent use of data throughout the project.
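A data dictionary entry can also be captured as structured data so that tools can enforce it. The field specs below are illustrative, not from a real SRS:

```python
# Illustrative data dictionary for the student example above.
STUDENT_DICTIONARY = {
    "Student_ID": {"type": "int", "length": 8, "unique": True, "key": "primary"},
    "Name":       {"type": "str", "max_length": 60},
    "GPA":        {"type": "float", "min": 0.0, "max": 10.0},
}

def validate_student_id(value) -> bool:
    """Check a value against the Student_ID entry: an integer of length 8."""
    spec = STUDENT_DICTIONARY["Student_ID"]
    return isinstance(value, int) and len(str(value)) == spec["length"]
```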
Q. State any two characteristics
of a good SRS.
A Software Requirements Specification (SRS)
must have several key characteristics to ensure clarity and usability. Two
important characteristics are:
- Complete: A good SRS includes all
necessary requirements, covering functional, non-functional, interface,
and performance aspects. No essential requirement should be missing.
- Unambiguous: Requirements must be stated
clearly so that they have only one interpretation. Each
stakeholder—including developers, testers, and clients—should understand
the same meaning.
For example, instead of saying “The system should respond quickly,” the SRS should specify “The system shall load results within 2 seconds.” A complete and unambiguous SRS minimizes misunderstandings, reduces rework, and guides accurate design and testing.
Q. Differentiate between LOC and
Function Point size estimation.
Lines of Code (LOC) estimation measures software
size based on the number of lines written in a programming language. It depends
heavily on coding style, language used, and individual developer practices. For
example, C programs typically require more LOC than Python for the same
functionality. LOC is useful after coding begins but not ideal in early stages.
Function Point (FP) estimation measures software size based on the
functionality delivered to the user, independent of programming language. It
evaluates inputs, outputs, inquiries, files, and interfaces. FP can be done
early in the requirements phase and supports better cost and effort estimation.
FP focuses on functionality, while LOC focuses on code volume.
Q. What is Token Count in
software metrics?
In software metrics, token count refers to
the total number of operators and operands in a program, as
defined in Halstead’s software science. Tokens are the basic building blocks of
source code, including keywords, symbols, variable names, and constants. Token
count helps measure program complexity, effort, and maintainability.
For example, a simple expression like “sum = a +
b;” contains tokens such as identifiers (sum, a, b), operators (=, +), and
punctuation. Counting these tokens helps determine values like program length
(N), vocabulary, difficulty, and estimated development effort. Higher token
counts typically indicate more complex code that may require more cognitive
effort to understand and maintain.
Q. Define Software Reliability.
Software reliability refers to the probability that
software will operate correctly and continuously without failure under
specified conditions for a given period. It represents the trustworthiness and
stability of the software when used in real environments. Reliability depends
on factors such as defect density, fault tolerance, testing quality, code complexity,
and operational conditions.
For example, an online banking system must reliably
process transactions without errors even under heavy load. Reliability
engineering involves techniques like fault tree analysis, redundancy, rigorous
testing, and error handling. High reliability increases user satisfaction,
reduces maintenance costs, and enhances system dependability, especially for
critical applications like medical devices or aviation systems.
Q. What is Boundary Value
Analysis?
Boundary Value Analysis (BVA) is a black-box testing technique
that focuses on validating the system’s behavior at the edges of input ranges,
where errors occur most frequently. Instead of testing many random inputs, BVA
targets boundary values such as minimum, maximum, just inside, and just outside
limits.
For example, if a student’s valid age range
is 18–60, BVA test cases include 17, 18, 60, and 61. This technique is highly
effective because defects often arise due to incorrect handling of boundary
conditions, such as off-by-one errors. BVA helps ensure correct input
validation, improves test coverage, and reduces the number of required test
cases without compromising quality.
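The age-range example translates directly into code; a small helper generating the six standard boundary inputs:

```python
def bva_values(lo: int, hi: int) -> list[int]:
    """Boundary-value test inputs for an inclusive [lo, hi] range:
    just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60   # the valid range from the example above

cases = bva_values(18, 60)   # [17, 18, 19, 59, 60, 61]
results = {age: is_valid_age(age) for age in cases}
```

An off-by-one bug such as `18 < age` instead of `18 <= age` is caught immediately by the `age == 18` case, which is precisely the kind of defect BVA targets.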
Q. What is Reverse Engineering?
Reverse engineering is the process of analyzing an
existing software system to understand its components, design, and
functionality when documentation is missing or outdated. It extracts
higher-level representations such as design diagrams, requirements, or
architecture from source code or executables. Reverse engineering helps
maintain legacy systems, migrate to new technologies, or recover lost knowledge
about old applications.
For example, a company may reverse
engineer a 20-year-old COBOL payroll system to understand business rules before
rewriting it in Java. It can also involve recovering database schemas,
generating UML diagrams, or identifying dependencies. Reverse engineering does
not modify the original software; instead, it enhances understanding,
supporting maintenance and modernization activities.
Q. Compare ISO 9001 and CMM
ISO 9001 is a generic quality management standard applicable to any industry. It focuses on documentation, process consistency, customer satisfaction, and continual improvement. Certification is done through external audits.
CMM (Capability Maturity Model) is software-specific and defines maturity levels (1–5) to improve process capability. It emphasizes engineering discipline, project management, and organizational maturity.
Key Differences:
- ISO 9001: Broad, compliance-based, “what to do.”
- CMM: Software-focused, improvement-based, “how to achieve maturity.”
Example: A software firm may hold ISO 9001 certification but aim for CMMI Level-5 to gain higher process reliability for government/defense projects.
Q. Explain Logarithmic Poisson Model (LPM) for Reliability
The Logarithmic Poisson Model (Musa’s model) predicts software reliability growth by modeling failures as a nonhomogeneous Poisson process. It states that the failure intensity decreases exponentially as more failures are detected and corrected.
Formula: Failure intensity λ(μ) = λ₀ × e^(−βμ),
where μ is the cumulative number of detected failures, λ₀ is the initial failure intensity, and β is the failure-intensity decay parameter.
Meaning: Each time testers detect and fix a failure, the remaining failure intensity drops exponentially in μ; equivalently, the expected number of failures grows logarithmically with execution time, which gives the model its name.
Example: A banking application that starts with 20 failures/day may reduce to 5 failures/day after systematic debugging and testing cycles following the model.
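A quick numeric check of the formula. The starting intensity of 20 failures/day is the figure from the example; the β value is chosen here purely so that fixing 10 failures quarters the intensity:

```python
import math

def failure_intensity(mu: float, lam0: float, beta: float) -> float:
    """Logarithmic Poisson model: lambda(mu) = lambda0 * exp(-beta * mu),
    the failure intensity remaining after mu failures have been fixed."""
    return lam0 * math.exp(-beta * mu)

# Hypothetical calibration: beta chosen so 10 fixes quarter the intensity.
beta = math.log(4) / 10
print(failure_intensity(10, 20.0, beta))   # 20/day drops to 5/day
```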
Q. Explain Significance of Software Quality Models & McCall’s Model
Software quality models help define, measure, and improve quality attributes. They provide structured parameters for evaluation and guide development and testing teams.
McCall’s Quality Model classifies quality into three categories:
- Product Operation: Reliability, Efficiency, Correctness
- Product Revision: Maintainability, Flexibility, Testability
- Product Transition: Portability, Reusability, Interoperability
Example: For an ERP system, maintainability and testability are key under product revision. McCall’s model enables organizations to measure each attribute using quality metrics and ensure balanced quality development.
Q. Explain Cause-Effect Graphing in Software Testing?
Cause-Effect Graphing identifies logical relationships between input conditions (causes) and output actions (effects). It helps derive minimal and effective test cases using Boolean logic.
Example:
Causes:
C1 – User is authenticated
C2 – User has admin rights
Effects:
E1 – Allow dashboard access
E2 – Allow configuration access
Rules:
- If C1 → E1
- If C1 AND C2 → E2
The graph is converted to a decision table, generating efficient test cases. The method reduces redundant tests and ensures coverage of combinations.
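The two rules above, written as Boolean logic, with the derived decision table enumerated below them:

```python
# Causes -> effects from the rules above, as Boolean logic.
def effects(c1_authenticated: bool, c2_admin: bool):
    e1_dashboard = c1_authenticated               # If C1 -> E1
    e2_config = c1_authenticated and c2_admin     # If C1 AND C2 -> E2
    return e1_dashboard, e2_config

# The derived decision table: each cause combination is one test case.
for c1 in (False, True):
    for c2 in (False, True):
        print(f"C1={c1} C2={c2} -> E1,E2 = {effects(c1, c2)}")
```

Note that two of the four combinations (both with C1 false) produce identical effects, which is how the method prunes redundant tests.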
Q. Explain Data Flow Testing
Data Flow Testing focuses on how variables are defined, used, and killed within the program. It finds anomalies such as “defined but not used,” “used before definition,” or “multiple definitions.”
Example:
It uses DU paths (Define-Use), DC paths (Define-Compute), and DD paths (Define-Define). Data Flow Testing is effective in detecting logical errors in business logic modules such as billing calculations or tax processing systems.
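A billing-style sketch of the define/use vocabulary, with an anomaly of the kind data flow testing flags (both functions are hypothetical):

```python
# Define-use (DU) paths for the variable `total`:
def billing(amount: float, tax_rate: float) -> float:
    total = amount                  # d1: define `total`
    total = total * (1 + tax_rate)  # use, then redefine (a define-define pair)
    return round(total, 2)          # final use of `total`

# An anomaly data flow testing would flag: `discount` is defined but never used.
def billing_with_anomaly(amount: float) -> float:
    discount = amount * 0.1         # defined...
    return amount                   # ...but never used (dead definition)
```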
Q. Explain Debugging Process
Debugging is the process of locating, analyzing, and fixing defects discovered during testing.
Debugging Process:
- Identify failure from test results.
- Analyze error using logs, traces, breakpoints.
- Locate defect by isolating faulty module or statement.
- Fix defect in code.
- Re-test to ensure fix correctness.
- Perform regression testing to ensure no side effects.
Example: Using an IDE debugger to track incorrect GST calculation in an e-commerce app.
Q. Explain Unit Testing Strategies
Common strategies:
- Black-Box Unit Testing: Focus on input–output behavior without seeing internal code. Example: Testing validateOTP() for valid and invalid OTP inputs.
- White-Box Unit Testing: Uses code structure (branches/paths/loops). Example: Loop testing in a discount calculation function.
- Stubs & Drivers: Used when dependent modules are missing. Example: A driver calls generateBill() while stubs mimic the payment service.
- Mocking: Replace external systems with mock objects (popular in microservices).
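The stub/mock idea in a few lines, using Python's standard `unittest.mock`. `generate_bill` and its payment dependency are hypothetical, echoing the generateBill() example above:

```python
from unittest.mock import Mock

# Hypothetical unit under test: depends on an external payment service
# that may not exist yet, so a mock stands in for it.
def generate_bill(items, payment_service):
    total = sum(items)
    payment_service.charge(total)   # external dependency, mocked in tests
    return total

mock_payments = Mock()              # mock object replacing the real service
assert generate_bill([100, 250], mock_payments) == 350
mock_payments.charge.assert_called_once_with(350)
```

The mock both isolates the unit (no real payment call) and verifies the interaction (charge was invoked exactly once, with the right total).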
Q. Explain Regression Testing & Challenges
Regression Testing ensures new changes do not break existing features. It is essential during patches, updates, or integrations.
Challenges:
- Large test suites take time and increase cost
- Identifying which tests to re-run requires smart selection
- Frequent UI changes cause test script maintenance issues
- Tight delivery timelines reduce regression depth
Example: In a food delivery app, adding a wallet payment option must not affect past orders, cart features, or delivery tracking.
Q. Explain Testing Tools & Examples
Testing tools automate manual testing tasks, improve accuracy, speed up execution, and support continuous integration.
Common tools:
- Selenium: Web automation testing
- JUnit/TestNG: Unit testing for Java
- JMeter: Performance testing
- Postman: API testing
Other examples include Jenkins, Cypress, QTP/UFT, and SonarQube. These tools streamline large-scale enterprise application testing.
Q. What is CMM and Why Is It Important in Software Engineering?
The Capability Maturity Model (CMM) is a structured framework developed by the Software Engineering Institute (SEI) to assess and improve an organization’s software development processes. CMM defines five maturity levels—Initial, Repeatable, Defined, Managed, and Optimizing—each representing the sophistication and discipline of processes within a software organization.
Why CMM Is Important in Software Engineering
- Improves Process Consistency: CMM ensures standardized workflows across projects, reducing variability and improving predictability.
- Enhances Product Quality: Mature processes lead to fewer defects, better reliability, and higher customer satisfaction.
- Reduces Project Risks: Defined processes prevent schedule overruns, cost escalation, and rework.
- Supports Scalability: Organizations with higher CMM levels can handle larger, more complex projects.
- Boosts Competitiveness: Many government and global clients prefer vendors with CMM/CMMI Level-3 or higher certification.
- Encourages Continuous Improvement: Higher maturity levels (4 and 5) focus on measurement, metrics, and process optimization.
Q. Requirement Elicitation Using the FAST Method
FAST (Facilitated Application Specification Technique) is a collaborative requirement elicitation method that brings users, developers, and analysts together in structured workshops. It reduces communication gaps and speeds up requirement clarification.
Steps:
- Prepare agenda and stakeholders
- Hold facilitated meetings to discuss needs
- Brainstorm features and constraints
- Resolve conflicts through group discussion
- Document consensus requirements
Example:
For an Online Food Delivery System, participants identify: login, restaurant listing, order tracking, payment, delivery assignment. Through FAST, conflicting needs—such as delivery time display vs. privacy—are resolved. FAST ensures quick, accurate, and user-approved requirement gathering.
Q. Steps in the Software Maintenance Process
Software maintenance includes activities performed after deployment to enhance performance or fix issues.
Steps:
- Identification: Users report issues or request enhancements.
- Analysis: Determine change impact, feasibility, and cost.
- Design: Create updated modules, data structures, or UIs.
- Implementation: Apply code changes or enhancements.
- Testing: Perform unit, integration, and regression testing.
- Documentation Update: Modify user manuals, design docs.
- Release & Review: Deploy changes and record results.
Example: Updating a mobile banking app’s authentication method from OTP to biometric requires analysis, redesign, coding, and regression testing.
Q. Function-Oriented Design (with Example & Diagram Description)
Function-Oriented Design focuses on decomposing the system into smaller functions that transform inputs into outputs. It emphasizes data flow, functional decomposition, and modularization.
Example: For a Library Management System, major functions include:
- Issue Book
- Return Book
- Search Catalog
- Update Inventory
These are decomposed further into subfunctions such as check availability, compute due date, calculate fine, etc.
Diagram Description:
A Data Flow Diagram (DFD) shows processes (circles), data stores (open rectangles), and data flow arrows. Example: “Issue Book” process receives input from “Member” and communicates with “Books Database”.
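The decomposition above can be sketched in code, with each function transforming inputs into outputs. This is an illustrative sketch only: the `catalog` dictionary stands in for the "Books Database" data store, and the function and field names are assumptions for the example.

```python
from datetime import date, timedelta

# Hypothetical in-memory stand-in for the "Books Database" data store.
catalog = {"B101": {"title": "Clean Code", "available": True}}

def check_availability(book_id):
    """Subfunction: is the book present and not already issued?"""
    book = catalog.get(book_id)
    return book is not None and book["available"]

def compute_due_date(issue_date, loan_days=14):
    """Subfunction: transform an issue date into a due date."""
    return issue_date + timedelta(days=loan_days)

def issue_book(book_id, today):
    """Top-level 'Issue Book' process: takes input from the Member,
    reads and updates the Books Database, and outputs a due date."""
    if not check_availability(book_id):
        return None
    catalog[book_id]["available"] = False
    return compute_due_date(today)
```

Here `issue_book` is the high-level process from the DFD, decomposed into the subfunctions `check_availability` and `compute_due_date`, matching the functional decomposition described above.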
Q. UI Design Guidelines for Web-Based Applications
Good web UI design improves usability, accessibility, and user satisfaction.
Guidelines include:
- Consistency: Uniform layout, colors, and navigation across pages
- Clarity: Clear labels, simple forms, easy instructions
- Responsive Design: Pages should adapt to mobile, tablet, desktop
- Feedback Messages: Indicate success, errors, and loading status
- Minimal Cognitive Load: Avoid clutter, use whitespace
- Accessibility: Proper contrast, alt-text for images, keyboard navigation
- Security Cues: Visible HTTPS, masked passwords
Example: An e-commerce checkout uses step-by-step forms, highlights errors instantly, and supports autofill.
Q. Purpose & Computation of Token Count in Software Metrics
Token Count is used in Halstead’s Software Metrics to measure program complexity based on operators and operands.
Purpose:
- Quantify programming effort
- Estimate development time
- Compare code complexity
- Predict maintainability
Computation:
Tokens include operators (+, if, return) and operands (variables, constants).
Halstead metrics use:
- n₁ = number of distinct operators
- n₂ = number of distinct operands
- N₁ = total occurrences of operators
- N₂ = total occurrences of operands
Program Length: N = N₁ + N₂
Vocabulary: n = n₁ + n₂
Example: In x = a + b, the operators are {=, +} (n₁ = 2, N₁ = 2) and the operands are {x, a, b} (n₂ = 3, N₂ = 3), giving length N = 5 and vocabulary n = 5.
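These counts can be computed mechanically. The sketch below is a simplification: it tokenizes with a regular expression and classifies tokens against a small, assumed operator set (real Halstead tools use a full language grammar, and conventions differ on punctuation such as semicolons).

```python
import re
from collections import Counter

# Assumed operator set for this toy example; real tools derive it
# from the programming language's grammar.
OPERATORS = {"=", "+", "-", "*", "/", "if", "return"}

def halstead_counts(code):
    """Split code into tokens, classify each as operator or operand,
    and return the basic Halstead counts."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code)
    ops = Counter(t for t in tokens if t in OPERATORS)
    opnds = Counter(t for t in tokens if t not in OPERATORS)
    n1, n2 = len(ops), len(opnds)                     # distinct counts
    N1, N2 = sum(ops.values()), sum(opnds.values())   # total occurrences
    return {"n1": n1, "n2": n2, "N1": N1, "N2": N2,
            "length_N": N1 + N2, "vocabulary_n": n1 + n2}
```

For `halstead_counts("x = a + b")` this yields n₁ = 2, n₂ = 3, N = 5, and n = 5, matching the worked example above.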
Q. Differences Between Hardware Reliability and Software Reliability
Hardware Reliability:
- Fails due to physical deterioration, aging, wear and tear
- Reliability improves through redundant components
- Follows a “bathtub curve” failure rate
- Example: Hard disk crash after prolonged use
Software Reliability:
- Fails due to design and logical faults, not physical decay
- Does not degrade with time, but changes can introduce bugs
- Measured by MTBF, fault density
- Example: App crashing due to unhandled exceptions
Hardware reliability is influenced by environmental factors, whereas software reliability depends on code quality, testing, and maintenance.
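The two metrics mentioned above (MTBF and fault density) are simple ratios, sketched here for illustration; the figures in the usage comments are made-up sample values, not measured data.

```python
def mtbf(total_operating_hours, number_of_failures):
    """Mean Time Between Failures = operating time / number of failures."""
    if number_of_failures == 0:
        raise ValueError("no failures observed; MTBF is undefined here")
    return total_operating_hours / number_of_failures

def fault_density(defects_found, kloc):
    """Fault density = defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

# Sample usage with illustrative numbers:
# mtbf(1000, 4)        -> 250.0 hours between failures
# fault_density(30, 12) -> 2.5 defects per KLOC
```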
Q. System Testing and Its Objectives
System Testing evaluates the complete integrated system to ensure it meets functional and non-functional requirements. It is performed after integration testing and before acceptance testing.
Objectives:
- Validate end-to-end functionality
- Check performance, security, usability, and reliability
- Verify data integrity across modules
- Ensure system works under real-world constraints
- Identify defects missed in earlier stages
Example: For a hospital management system, system testing verifies patient registration, billing, pharmacy, and lab modules working together under realistic load.
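An end-to-end system test differs from a unit test in that it exercises several modules together. The sketch below uses hypothetical stubs for two of the hospital modules (registration and billing) purely to illustrate the idea; it is not a real system.

```python
# Hypothetical module stubs for a hospital management system.
def register_patient(db, name):
    """Registration module: create a patient record, return its id."""
    pid = len(db["patients"]) + 1
    db["patients"][pid] = {"name": name, "bills": []}
    return pid

def add_bill(db, pid, amount):
    """Billing module: attach a charge to a patient."""
    db["patients"][pid]["bills"].append(amount)

def total_due(db, pid):
    """Billing module: sum all charges for a patient."""
    return sum(db["patients"][pid]["bills"])

def system_test_end_to_end():
    """System-level check: registration and billing must work
    together across module boundaries, not just in isolation."""
    db = {"patients": {}}
    pid = register_patient(db, "Asha")
    add_bill(db, pid, 500)    # consultation fee
    add_bill(db, pid, 1200)   # lab test
    assert total_due(db, pid) == 1700
    return "system test passed"
```

Note that the test validates a complete workflow (register, then bill, then query), which is the end-to-end functionality objective listed above.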
Q. What is SRS? Explain its Characteristics & IEEE Structure
A Software Requirements Specification (SRS) is a formal document that describes what a software system should do. It defines functional requirements, non-functional requirements, constraints, interfaces, and acceptance criteria. SRS acts as a contract between stakeholders and developers, ensuring clarity and reducing misunderstandings.
Characteristics of a Good SRS
- Correct: Accurately represents user needs
- Complete: Contains all requirements and constraints
- Unambiguous: Only one interpretation
- Verifiable: Requirements can be tested
- Consistent: No conflicting statements
- Modifiable: Easy to update
- Traceable: Each requirement linked to its source
IEEE 830 SRS Structure
- Introduction – Purpose, scope, definitions
- Overall Description – Product perspective, constraints, assumptions
- Specific Requirements – Functional, non-functional, interfaces
- Appendices – Supporting information
- Index / Glossary
Example: An SRS for an online food delivery app lists login features, payment requirements, performance constraints, and security rules.