Which of the following is LEAST likely to describe a task performed by someone in a testing role?
Evaluate test basis and test object
Create test completion report
Assess testability of test object
Define test environment requirements
Tasks typically performed by someone in a testing role include evaluating the test basis and test object, creating test completion reports, and assessing the testability of the test object. These tasks directly support the quality and effectiveness of the software testing process.
Evaluate test basis and test object: This involves reviewing and analyzing the documents and artifacts used as the basis for testing. It is a fundamental task: testers must understand what needs to be tested and ensure that the test object is adequately covered by the test cases.
Create test completion report: This is part of the test closure activities. Testers are responsible for summarizing the testing activities, outcomes, and lessons learned, which are compiled into a test completion report. This report is crucial for stakeholders to understand the test results and make informed decisions.
Assess testability of test object: This task involves evaluating how easily a test object (such as a piece of software) can be tested. This includes considering aspects such as the availability of test data, the ability to isolate the object for testing, and the clarity of the requirements. Testers often perform this assessment to identify potential challenges and mitigate them before testing begins.
On the other hand:
Define test environment requirements: While testers may provide input on the test environment, the primary responsibility for defining and setting up the test environment usually falls to roles such as system administrators or infrastructure specialists. They ensure that the necessary hardware, software, and network configurations are in place for testing to proceed. This task is less likely to be the sole responsibility of a tester.
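For illustration, a test environment requirements specification might capture details like the following. This is a minimal Python sketch; every field name and value is hypothetical, not taken from any ISTQB document.

```python
# Hypothetical sketch of what a test environment requirements
# specification might capture; the fields below are illustrative only.
test_environment_requirements = {
    "operating_system": "Ubuntu 22.04 LTS",
    "database": {"engine": "PostgreSQL", "version": "15"},
    "network": {"bandwidth_mbps": 100, "open_ports": [443, 5432]},
    "test_data": "synthetic customer accounts, refreshed nightly",
    "tools": ["Selenium Grid 4", "JMeter 5.6"],
}

# A tester may review or contribute to such a specification, but
# provisioning it is typically the job of infrastructure specialists.
for item, requirement in test_environment_requirements.items():
    print(f"{item}: {requirement}")
```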
The four test levels used in the ISTQB syllabus are:
1. Component (unit) testing
2. Integration testing
3. System testing
4. Acceptance testing
An organization wants to do away with integration testing but otherwise follow V-model. Which of the following statements is correct?
It is allowed as organizations can decide on the test levels to perform depending on the context of the system under test
It is allowed because integration testing is not an important test level and can be dispensed with.
It is not allowed because integration testing is a very important test level and ignoring it means definite poor product quality
It is not allowed as organizations can't change the test levels as these are chosen on the basis of the SDLC (software development life cycle) model
The V-model is a software development life cycle model in which four test levels correspond to four development phases: component (unit) testing with component design, integration testing with architectural design, system testing with system requirements, and acceptance testing with user requirements. The model emphasizes verifying and validating each development phase with a corresponding test level, and keeping the test objectives, test basis, and test artifacts aligned and consistent across the levels.

An organization that wants to follow the V-model therefore cannot do away with integration testing: removing it would break the symmetry and completeness of the model and compromise the quality and reliability of the system under test. Integration testing exercises the interactions and interfaces between components or subsystems and detects defects or inconsistencies that arise from integrating different parts of the software. It is essential for ensuring the functionality, performance, and compatibility of the system as a whole, and for identifying and resolving integration issues early in development. Skipping it would increase the risk of finding serious defects later in the test process, or worse, in production, where they are more costly and difficult to fix and can damage the organization's reputation and credibility. Therefore, the correct answer is D.
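The correspondence between development phases and test levels can be expressed as a simple mapping. The following Python sketch restates the pairing described above; it is illustrative only.

```python
# V-model: each test level verifies the work products of a
# corresponding development phase, as described above.
v_model = {
    "user requirements": "acceptance testing",
    "system requirements": "system testing",
    "architectural design": "integration testing",
    "component design": "component (unit) testing",
}

for phase, test_level in v_model.items():
    print(f"{phase:>22}  <->  {test_level}")
```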
The other options are incorrect because:
A. It is allowed as organizations can decide on the test levels to perform depending on the context of the system under test. While the choice and scope of test levels may vary with the context of the system under test (its size, complexity, criticality, and risk level), an organization cannot simply skip a test level that is defined and required by its chosen software development life cycle model. It must follow the principles and guidelines of that model and keep the test levels consistent and coherent with the development phases. If the organization wants more flexibility in choosing test levels, it should consider a different life cycle model, such as an agile or iterative model, that allows more dynamic and incremental testing approaches.
B. It is allowed because integration testing is not an important test level and can be dispensed with. This statement is false and misleading: integration testing is a very important test level that cannot be dispensed with. It is vital for testing the interactions and interfaces between components or subsystems and for ensuring the functionality, performance, and compatibility of the system as a whole. It can reveal defects that component (unit) testing alone may miss, such as interface errors, data flow errors, integration logic errors, or performance degradation. It also helps to verify and validate the architectural design and the integration strategy, to confirm that the system meets quality attributes such as reliability, usability, security, and maintainability, and to give developers and stakeholders feedback and confidence about progress and quality. It is therefore a crucial test level that should not be skipped or omitted.
C. It is not allowed because integration testing is a very important test level and ignoring it means definite poor product quality. This statement is partially true: integration testing is indeed very important, and skipping it could result in poor product quality. However, the statement is too absolute, as it implies that integration testing alone determines product quality and that ignoring it guarantees a poor product. Many other factors affect product quality: the quality of the requirements, design, code, and other test levels; the effectiveness and efficiency of the test techniques and tools; the competence and experience of the developers and testers; the availability and adequacy of resources and environment; the management and communication of the project; and the expectations and satisfaction of customers and users. Skipping integration testing therefore does not mean definite poor product quality, but rather a higher risk and likelihood of it.
References: ISTQB Certified Tester Foundation Level Syllabus, Version 4.0, 2023, Section 2.3; ISTQB Glossary of Testing Terms; ISTQB CTFL 4.0 Sample Exam - Answers, Version 1.1, 2023, Question 104.
Which of the following statements about exploratory testing is true?
Exploratory testing is an experience-based test technique in which testers explore the requirements specification to detect non-testable requirements
When exploratory testing is conducted following a session-based approach, the issues detected by the testers can be documented in session sheets
Exploratory testing is an experience-based test technique used by testers during informal code reviews to find defects by exploring the source code
In exploratory testing, testers usually produce scripted tests and establish bidirectional traceability between these tests and the items of the test basis
Exploratory testing is an experience-based test technique in which testers dynamically design and execute tests based on their knowledge, intuition, and learning of the software system, without following predefined test scripts or test cases. It can be conducted following a session-based approach, a structured way of managing and measuring exploratory testing. In a session-based approach, testers perform uninterrupted test sessions, usually lasting between 60 and 120 minutes, each with a specific charter or goal, and document the issues detected, the test coverage achieved, and the time spent in session sheets. Session sheets are records of the test activities, results, and observations of a session, and can be used for reporting, debriefing, and learning purposes.
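As an illustration, a session sheet might record fields like the following. This is a minimal Python sketch; the fields are hypothetical and do not represent an official ISTQB template.

```python
from dataclasses import dataclass, field


# Hypothetical session sheet for session-based exploratory testing;
# the fields below are illustrative, not an official ISTQB template.
@dataclass
class SessionSheet:
    charter: str                  # goal of the session
    tester: str
    duration_minutes: int         # typically 60-120
    areas_covered: list = field(default_factory=list)
    issues_found: list = field(default_factory=list)
    notes: str = ""


sheet = SessionSheet(
    charter="Explore the checkout flow for pricing inconsistencies",
    tester="A. Tester",
    duration_minutes=90,
    areas_covered=["cart", "discount codes", "payment form"],
    issues_found=["Discount applied twice when code is re-entered"],
    notes="Payment form slow under repeated submissions; follow up.",
)
print(sheet)
```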
The other statements are false. First, exploratory testing is not a technique in which testers explore the requirements specification to detect non-testable requirements; rather, testers explore the software system itself to detect functional and non-functional defects and to learn new information, risks, or opportunities. Non-testable requirements are requirements that are ambiguous, incomplete, inconsistent, or not verifiable, which can affect the quality and effectiveness of the testing process. They are detected by applying static testing techniques, such as reviews or inspections, to the requirements specification before the software system is developed or tested.
Second, exploratory testing is not used during informal code reviews to find defects by exploring the source code; it is used during dynamic testing to find defects by exploring the behavior and performance of the software system, without examining the source code. Informal code reviews are static testing techniques in which one or more reviewers analyze the source code, without following a formal process or using a checklist, to identify defects, violations, or improvements. They are usually performed by developers or peers, not by testers.
Third, in exploratory testing, testers usually do not produce scripted tests or establish bidirectional traceability between those tests and the items of the test basis; they produce unscripted tests and adapt them based on the feedback and findings of the testing process. Scripted tests are designed and documented in advance, with predefined inputs, outputs, and expected results, and are executed according to a test plan or test procedure. Bidirectional traceability is the ability to trace relationships both forward and backward between the items of the test basis (requirements, design, risks) and the test artifacts (test cases, test results, defects). Scripted tests and bidirectional traceability are associated with more formal, structured testing approaches, such as specification-based or structure-based test techniques, not with exploratory testing.
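By contrast, here is a minimal Python sketch of what bidirectional traceability between requirements and scripted tests could look like; all identifiers are hypothetical.

```python
# Hypothetical forward traceability: requirement -> test cases.
requirement_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
}

# Derive backward traceability (test case -> requirements) from the
# forward mapping, giving traceability in both directions.
test_to_requirements = {}
for req, tests in requirement_to_tests.items():
    for tc in tests:
        test_to_requirements.setdefault(tc, []).append(req)

print(test_to_requirements)
# {'TC-101': ['REQ-001'], 'TC-102': ['REQ-001'], 'TC-103': ['REQ-002']}
```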
References: ISTQB® Certified Tester Foundation Level Syllabus v4.0, Section 2.2.3 (Experience-based Test Design Techniques); ISTQB® Glossary of Testing Terms (Exploratory Testing, Session-based Testing, Session Sheet, Non-testable Requirement, Static Testing, Informal Review, Dynamic Testing, Scripted Testing, Bidirectional Traceability).
For each test case to be executed, the following table specifies its dependencies and the required configuration of the test environment for running that test case:

Test case    Depends on    Required configuration
TC1          -             CONF2
TC2          TC4           CONF2
TC3          TC4           CONF1
TC4          -             CONF1
TC5          TC1           CONF2
Assume that CONF1 is the initial configuration of the test environment. Based on this assumption, which of the following is a test execution schedule that is compatible with the specified dependencies and allows minimizing the number of switches between the different configurations of the test environment?
TC4, TC3, TC2, TC1, TC5
TC1, TC5, TC4, TC3, TC2
TC4, TC3, TC2, TC5, TC1
TC4, TC1, TC5, TC2, TC3
To determine the correct execution order that minimizes the number of configuration switches and respects the dependencies, we need to consider the following:
Initial Configuration: CONF1.
Dependencies:
TC1 depends on nothing.
TC2 depends on TC4.
TC3 depends on TC4.
TC4 depends on nothing.
TC5 depends on TC1.
Configuration Requirements:
TC1 requires CONF2.
TC2 requires CONF2.
TC3 requires CONF1.
TC4 requires CONF1.
TC5 requires CONF2.
Given the initial configuration is CONF1, start with test cases that can run on CONF1 and respect the dependencies. Then switch to CONF2 only when necessary. The optimal order to minimize configuration switches is:
Start with TC4 (no dependencies, CONF1).
Continue with TC3 (depends on TC4, CONF1).
Switch to CONF2.
Execute TC2 (depends on TC4, CONF2).
Continue with TC1 (no dependencies, CONF2).
Finally, execute TC5 (depends on TC1, CONF2).
Therefore, the correct order is:
TC4 (CONF1)
TC3 (CONF1)
TC2 (CONF2)
TC1 (CONF2)
TC5 (CONF2)
This schedule respects all dependencies and requires only one configuration switch (from CONF1 to CONF2, after TC3), which is the minimum possible since some test cases require each configuration. Thus, the answer is A: TC4, TC3, TC2, TC1, TC5.
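The reasoning above can be checked programmatically. The following Python sketch encodes the dependencies and configurations from the question, validates each candidate schedule, and counts configuration switches starting from CONF1.

```python
# Dependencies and required configurations from the question's table.
depends_on = {"TC1": [], "TC2": ["TC4"], "TC3": ["TC4"], "TC4": [], "TC5": ["TC1"]}
config = {"TC1": "CONF2", "TC2": "CONF2", "TC3": "CONF1", "TC4": "CONF1", "TC5": "CONF2"}


def evaluate(schedule, initial="CONF1"):
    """Return (is_valid, switches): whether the schedule respects all
    dependencies, and how many configuration switches it needs."""
    executed, current, switches = set(), initial, 0
    for tc in schedule:
        if any(dep not in executed for dep in depends_on[tc]):
            return False, None  # a dependency has not been executed yet
        if config[tc] != current:
            current, switches = config[tc], switches + 1
        executed.add(tc)
    return True, switches


options = {
    "A": ["TC4", "TC3", "TC2", "TC1", "TC5"],
    "B": ["TC1", "TC5", "TC4", "TC3", "TC2"],
    "C": ["TC4", "TC3", "TC2", "TC5", "TC1"],
    "D": ["TC4", "TC1", "TC5", "TC2", "TC3"],
}
for name, schedule in options.items():
    print(name, evaluate(schedule))
# A: (True, 1) -- valid with a single switch; B: (True, 3); D: (True, 2);
# C: (False, None) -- TC5 is scheduled before its dependency TC1.
```

Running this confirms that option A is the only valid schedule with a single configuration switch.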