An Intrusion Detection System (IDS) is generating alarms that a user account has over 100 failed login attempts per minute. A sniffer is placed on the network, and a variety of passwords for that user are noted. Which of the following is MOST likely occurring?
A dictionary attack
A Denial of Service (DoS) attack
A spoofing attack
A backdoor installation
A dictionary attack is a type of brute-force attack that attempts to guess a user’s password by trying a large number of possible words or phrases, often derived from a dictionary or a list of commonly used passwords. A dictionary attack can be detected by an Intrusion Detection System (IDS) if it generates a high number of failed login attempts per minute, as well as a variety of passwords for the same user. A sniffer can capture the network traffic and reveal the passwords being tried by the attacker. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 657; CISSP For Dummies, 7th Edition, Chapter 6, page 197.
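As a rough illustration of the detection logic described above, the sketch below counts failed logins per user in a sliding one-minute window and raises an alarm when a threshold is exceeded; the threshold, window size, and username are illustrative and not taken from any particular IDS product.

```python
from collections import deque
import time

# Minimal sketch of a threshold-based alert similar to what an IDS might apply:
# count failed logins per user within a sliding 60-second window and raise an
# alarm when the count exceeds a threshold.

FAILED_LOGIN_THRESHOLD = 100   # failures per minute that trigger an alarm
WINDOW_SECONDS = 60

failed_attempts = {}  # username -> deque of failure timestamps

def record_failed_login(username, timestamp=None):
    """Record a failed login and return True if the alarm threshold is exceeded."""
    now = timestamp if timestamp is not None else time.time()
    window = failed_attempts.setdefault(username, deque())
    window.append(now)
    # Drop events older than the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > FAILED_LOGIN_THRESHOLD

# Example: simulate a burst of failures characteristic of a dictionary attack.
if __name__ == "__main__":
    base = time.time()
    for i in range(150):
        alarm = record_failed_login("jsmith", base + i * 0.3)
    print("alarm raised:", alarm)
```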
Who must approve modifications to an organization's production infrastructure configuration?
Technical management
Change control board
System operations
System users
A change control board (CCB) is a group of stakeholders who are responsible for reviewing, approving, and monitoring changes to an organization’s production infrastructure configuration. A production infrastructure configuration is the set of hardware, software, network, and environmental components that support the operation of an information system. Changes to the production infrastructure configuration can affect the security, performance, availability, and functionality of the system. Therefore, changes must be carefully planned, tested, documented, and authorized before implementation. A CCB ensures that changes are aligned with the organization’s objectives, policies, and standards, and that changes do not introduce any adverse effects or risks to the system or the organization. A CCB is not the same as technical management, system operations, or system users, who may be involved in the change management process, but do not have the authority to approve changes.
Why is a system's criticality classification important in large organizations?
It provides for proper prioritization and scheduling of security and maintenance tasks.
It reduces critical system support workload and reduces the time required to apply patches.
It allows for clear systems status communications to executive management.
It provides for easier determination of ownership, reducing confusion as to the status of the asset.
A system’s criticality classification is important in large organizations because it provides for proper prioritization and scheduling of security and maintenance tasks. The classification reflects the level of importance or impact that a system has on the organization’s mission, objectives, operations, or functions, and may depend on factors such as the system’s availability, integrity, confidentiality, functionality, performance, or reliability. It helps the organization allocate resources, implement controls, perform audits, apply patches, conduct backups, and respond to incidents according to each system’s priority and risk. Criticality classification does not necessarily reduce critical system support workload or the time required to apply patches, as these depend on other factors such as the system’s complexity, configuration, or vulnerability. It may allow for clearer system status communications to executive management and easier determination of ownership, but these are secondary benefits rather than the primary reason for its importance.
Which of the following is considered best practice for preventing e-mail spoofing?
Spam filtering
Cryptographic signature
Uniform Resource Locator (URL) filtering
Reverse Domain Name Service (DNS) lookup
The best practice for preventing e-mail spoofing is to use cryptographic signatures. E-mail spoofing is a technique that involves forging the sender’s address or identity in an e-mail message, usually to trick the recipient into opening a malicious attachment, clicking on a phishing link, or disclosing sensitive information. Cryptographic signatures are digital signatures that are created by encrypting the e-mail message or a part of it with the sender’s private key, and attaching it to the e-mail message. Cryptographic signatures can be used to verify the authenticity and integrity of the sender and the message, and to prevent e-mail spoofing. References: What is Email Spoofing?; How to Prevent Email Spoofing.
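For illustration only, the following sketch uses the third-party cryptography package to show the underlying sign-with-the-private-key / verify-with-the-public-key idea; real e-mail signing is normally done with S/MIME or OpenPGP on top of this primitive, and the addresses and message content are made up.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The sender signs the message body with their private key; any recipient who
# holds the matching public key can verify that the message really came from
# the sender and was not altered in transit.
sender_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public_key = sender_private_key.public_key()

message = b"From: alice@example.com\r\nSubject: Invoice\r\n\r\nPlease find attached."
signature = sender_private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# A spoofed or tampered message fails verification.
try:
    sender_public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid: message is authentic")
except InvalidSignature:
    print("signature invalid: possible spoofing or tampering")
```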
Copyright provides protection for which of the following?
Ideas expressed in literary works
A particular expression of an idea
New and non-obvious inventions
Discoveries of natural phenomena
Copyright is a form of intellectual property that grants the author or creator of an original work the exclusive right to reproduce, distribute, perform, display, or license the work. Copyright does not protect ideas, concepts, facts, discoveries, or methods, but only the particular expression of an idea in a tangible medium, such as a book, a song, a painting, or a software program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 287; CISSP For Dummies, 7th Edition, Chapter 3, page 87.
The FIRST step in building a firewall is to
assign the roles and responsibilities of the firewall administrators.
define the intended audience who will read the firewall policy.
identify mechanisms to encourage compliance with the policy.
perform a risk analysis to identify issues to be addressed.
The first step in building a firewall is to perform a risk analysis to identify the assets and resources that need to be protected, the threats and vulnerabilities that exist, and the potential impact of a security breach. Based on the risk analysis, the firewall design can be tailored to meet the specific security requirements and objectives of the organization.
Which of the following is the MOST important consideration when storing and processing Personally Identifiable Information (PII)?
Encrypt and hash all PII to avoid disclosure and tampering.
Store PII for no more than one year.
Avoid storing PII in a Cloud Service Provider.
Adherence to collection limitation laws and regulations.
The most important consideration when storing and processing PII is to adhere to the collection limitation laws and regulations that apply to the jurisdiction and context of the data processing. Collection limitation is a principle that states that PII should be collected only for a specific, legitimate, and lawful purpose, and only to the extent that is necessary for that purpose. By following this principle, the data processor can minimize the amount of PII that is stored and processed, and reduce the risk of data breaches, misuse, or unauthorized access. Encrypting and hashing all PII, storing PII for no more than one year, and avoiding storing PII in a cloud service provider are also good practices for protecting PII, but they are not as important as adhering to the collection limitation laws and regulations. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 290.
Passive Infrared Sensors (PIR) used in a non-climate controlled environment should
reduce the detected object temperature in relation to the background temperature.
increase the detected object temperature in relation to the background temperature.
automatically compensate for variance in background temperature.
detect objects of a specific temperature independent of the background temperature.
Passive Infrared Sensors (PIR) are devices that detect motion by sensing the infrared radiation emitted by objects. In a non-climate controlled environment, the background temperature may vary due to weather, seasons, or other factors. This may affect the sensitivity and accuracy of the PIR sensors, as they may not be able to distinguish between the object and the background. Therefore, the PIR sensors should have a feature that automatically adjusts the threshold or baseline of the background temperature to avoid false alarms or missed detections.
A and B are incorrect because they are not feasible or desirable solutions. Reducing or increasing the detected object temperature in relation to the background temperature would require altering the physical properties of the object or the sensor, which may not be possible or practical. Moreover, this may also affect the performance or functionality of the object or the sensor.
D is incorrect because it is not realistic or reliable. Detecting objects of a specific temperature independent of the background temperature would require the PIR sensors to have a very high resolution and precision, which may not be available or affordable. Moreover, this may also limit the range and scope of the PIR sensors, as they may not be able to detect objects that have different temperatures or emit different amounts of infrared radiation.
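A minimal sketch of the automatic background compensation described above, assuming a simple exponential moving average as the adaptive baseline; the temperatures, smoothing factor, and trigger threshold are illustrative values rather than vendor specifications.

```python
class AdaptivePIRDetector:
    """Illustrative sketch: compensate for slow background temperature drift
    with an exponential moving average, and flag only readings that deviate
    sharply from that adaptive baseline."""

    def __init__(self, initial_background, alpha=0.05, trigger_delta=2.0):
        self.background = initial_background  # adaptive baseline (degrees C)
        self.alpha = alpha                    # how quickly the baseline adapts
        self.trigger_delta = trigger_delta    # deviation that indicates motion

    def process_reading(self, reading):
        motion = abs(reading - self.background) > self.trigger_delta
        if not motion:
            # Fold slow ambient changes into the baseline instead of alarming.
            self.background += self.alpha * (reading - self.background)
        return motion

detector = AdaptivePIRDetector(initial_background=18.0)
print(detector.process_reading(18.4))   # gradual ambient drift -> False
print(detector.process_reading(31.0))   # warm body in view -> True
```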
The goal of software assurance in application development is to
enable the development of High Availability (HA) systems.
facilitate the creation of Trusted Computing Base (TCB) systems.
prevent the creation of vulnerable applications.
encourage the development of open source applications.
The goal of software assurance in application development is to prevent the creation of vulnerable applications. Software assurance is the process of ensuring that the software is designed, developed, and maintained in a secure, reliable, and trustworthy manner. Software assurance involves applying security principles, standards, and best practices throughout the software development life cycle, such as security requirements, design, coding, testing, deployment, and maintenance. Software assurance aims to prevent or reduce the introduction of vulnerabilities, defects, or errors in the software that could compromise its security, functionality, or quality. References: Software Assurance; Software Assurance - OWASP Cheat Sheet Series.
Which of the following is the best practice for testing a Business Continuity Plan (BCP)?
Test before the IT Audit
Test when environment changes
Test after installation of security patches
Test after implementation of system patches
The best practice for testing a Business Continuity Plan (BCP) is to test it when the environment changes, such as when there are new business processes, technologies, threats, or regulations. This ensures that the BCP is updated, relevant, and effective for the current situation. Testing the BCP before the IT audit, after installation of security patches, or after implementation of system patches are not the best practices, as they may not reflect the actual changes in the business environment or the potential disruptions that may occur. References: Comprehensive Guide to Business Continuity Testing; Maximizing Your BCP Testing Efforts: Best Practices.
Which of the following does Temporal Key Integrity Protocol (TKIP) support?
Multicast and broadcast messages
Coordination of IEEE 802.11 protocols
Wired Equivalent Privacy (WEP) systems
Synchronization of multiple devices
Temporal Key Integrity Protocol (TKIP) supports multicast and broadcast messages by using a group temporal key that is shared by all the devices in the same wireless network. This key is used to encrypt and decrypt the messages that are sent to multiple recipients at once. TKIP also supports unicast messages by using a pairwise temporal key that is unique for each device and session. TKIP does not support coordination of IEEE 802.11 protocols, as it is itself a protocol that was designed to replace WEP. TKIP was designed to run on legacy WEP hardware, but it replaces WEP rather than supporting it, adding security features such as per-packet keying and a message integrity check. TKIP does not support synchronization of multiple devices, as it does not provide any clock or time synchronization mechanism. References: Temporal Key Integrity Protocol - Wikipedia; Wi-Fi Security: Should You Use WPA2-AES, WPA2-TKIP, or Both? - How-To Geek.
Which of the following MUST be done when promoting a security awareness program to senior management?
Show the need for security; identify the message and the audience
Ensure that the security presentation is designed to be all-inclusive
Notify them that their compliance is mandatory
Explain how hackers have enhanced information security
The most important thing to do when promoting a security awareness program to senior management is to show the need for security; identify the message and the audience. This means that you should demonstrate how security awareness can benefit the organization, reduce risks, and align with the business goals. You should also tailor your message and your audience according to the specific security issues and challenges that your organization faces. Ensuring that the security presentation is designed to be all-inclusive, notifying them that their compliance is mandatory, or explaining how hackers have enhanced information security are not the most effective ways to promote a security awareness program, as they may not address the specific needs, interests, or concerns of senior management. References: Seven Keys to Success for a More Mature Security Awareness Program; 6 Metrics to Track in Your Cybersecurity Awareness Training Campaign.
Which layer of the Open Systems Interconnections (OSI) model implementation adds information concerning the logical connection between the sender and receiver?
Physical
Session
Transport
Data-Link
The Transport layer of the Open Systems Interconnection (OSI) model implementation adds information concerning the logical connection between the sender and receiver. The Transport layer is responsible for establishing, maintaining, and terminating the end-to-end communication between two hosts, as well as ensuring the reliability, integrity, and flow control of the data. The Transport layer uses protocols such as TCP and UDP to provide connection-oriented or connectionless services, and adds headers that contain information such as source and destination ports, sequence and acknowledgment numbers, and checksums. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 499; CISSP For Dummies, 7th Edition, Chapter 5, page 145.
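To make the header fields concrete, the sketch below packs and then unpacks a simplified 20-byte TCP header with Python's struct module; the port numbers, sequence and acknowledgment numbers, and checksum are illustrative values.

```python
import struct

# Illustrative sketch: the fields a Transport-layer (TCP) header adds in front
# of the application data. The 20-byte fixed header is packed and unpacked
# with the network byte order format "!HHIIHHHH".
raw_header = struct.pack(
    "!HHIIHHHH",
    54321,              # source port
    443,                # destination port
    1000,               # sequence number
    2000,               # acknowledgment number
    (5 << 12) | 0x18,   # data offset (5 words) plus PSH and ACK flags
    65535,              # window size
    0x1C46,             # checksum (placeholder value)
    0,                  # urgent pointer
)

src_port, dst_port, seq, ack, offset_flags, window, checksum, urgent = \
    struct.unpack("!HHIIHHHH", raw_header)
print(f"ports {src_port}->{dst_port}, seq={seq}, ack={ack}, checksum={checksum:#06x}")
```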
Which of the following is a network intrusion detection technique?
Statistical anomaly
Perimeter intrusion
Port scanning
Network spoofing
Statistical anomaly is a network intrusion detection technique that compares the current network activity with a baseline of normal behavior, and detects any deviations that exceed a predefined threshold. Statistical anomaly can identify unknown or novel attacks, but it may also generate false positives if the baseline is not updated or the threshold is not set properly. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 770; CISSP For Dummies, 7th Edition, Chapter 7, page 237.
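A minimal sketch of the statistical anomaly idea, assuming a simple z-score comparison against a recorded baseline; the traffic counts and the three-standard-deviation threshold are illustrative.

```python
import statistics

# Minimal sketch of statistical anomaly detection: compare current activity
# (e.g., connections per minute) against a baseline of normal behaviour and
# flag deviations beyond a configurable threshold.

baseline = [120, 115, 130, 125, 118, 122, 128, 119, 124, 121]  # normal traffic
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
THRESHOLD = 3.0  # how many standard deviations count as anomalous

def is_anomalous(observed):
    z_score = abs(observed - mean) / stdev
    return z_score > THRESHOLD

print(is_anomalous(126))   # within normal variation -> False
print(is_anomalous(600))   # large deviation -> True (possible intrusion)
```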
In the area of disaster planning and recovery, what strategy entails the presentation of information about the plan?
Communication
Planning
Recovery
Escalation
Communication is the strategy that involves the presentation of information about the disaster recovery plan to the stakeholders, such as management, employees, customers, vendors, and regulators. Communication ensures that everyone is aware of their roles and responsibilities in the event of a disaster, and that the plan is updated and tested regularly. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 1019; CISSP For Dummies, 7th Edition, Chapter 10, page 343.
Which one of the following transmission media is MOST effective in preventing data interception?
Microwave
Twisted-pair
Fiber optic
Coaxial cable
Fiber optic is the most effective transmission media in preventing data interception, as it uses light signals to transmit data over thin glass or plastic fibers. Fiber optic cables are immune to electromagnetic interference, which means that they cannot be tapped or eavesdropped by external devices or signals. Fiber optic cables also have a low attenuation rate, which means that they can transmit data over long distances without losing much signal strength or quality. Microwave, twisted-pair, and coaxial cable are less effective transmission media in preventing data interception, as they use electromagnetic waves or electrical signals to transmit data over metal wires or air. These media are susceptible to interference, noise, or tapping, which can compromise the confidentiality or integrity of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 406; CISSP For Dummies, 7th Edition, Chapter 4, page 85.
An advantage of link encryption in a communications network is that it
makes key management and distribution easier.
protects data from start to finish through the entire network.
improves the efficiency of the transmission.
encrypts all information, including headers and routing information.
An advantage of link encryption in a communications network is that it encrypts all information, including headers and routing information. Link encryption is a type of encryption that is applied at the data link layer of the OSI model, and encrypts the entire packet or frame as it travels from one node to another. Link encryption can protect the confidentiality and integrity of the data, as well as the identity and location of the nodes. Link encryption does not make key management and distribution easier, as it requires each node to have a separate key for each link. Link encryption does not protect data from start to finish through the entire network, as it only encrypts the data while it is in transit, and decrypts it at each node. Link encryption does not improve the efficiency of the transmission, as it adds overhead and latency to the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 419.
The PRIMARY purpose of a security awareness program is to
ensure that everyone understands the organization's policies and procedures.
communicate that access to information will be granted on a need-to-know basis.
warn all users that access to all systems will be monitored on a daily basis.
comply with regulations related to data and information protection.
The primary purpose of a security awareness program is to ensure that everyone understands the organization’s policies and procedures related to information security. A security awareness program is a set of activities, materials, or events that aim to educate and inform the organization’s employees, contractors, partners, and customers about its security goals, principles, and practices. A security awareness program can help to create a security culture, improve security behavior, and reduce human errors or risks. Communicating that access to information will be granted on a need-to-know basis, warning all users that access to all systems will be monitored on a daily basis, and complying with regulations related to data and information protection are not the primary purposes of a security awareness program, as they are more specific or secondary objectives that may be part of the program, but not the main goal. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 28.
Which one of the following is the MOST important in designing a biometric access system if it is essential that no one other than authorized individuals are admitted?
False Acceptance Rate (FAR)
False Rejection Rate (FRR)
Crossover Error Rate (CER)
Rejection Error Rate
The most important factor in designing a biometric access system when it is essential that no one other than authorized individuals is admitted is the False Acceptance Rate (FAR). FAR is the probability that a biometric system will incorrectly accept an unauthorized user. FAR is a measure of the security or accuracy of the biometric system, and it should be as low as possible to prevent unauthorized access. False Rejection Rate (FRR), Crossover Error Rate (CER), and Rejection Error Rate are not as important as FAR in this scenario, as they relate more to the usability or convenience of the biometric system than to its security. FRR is the probability that a biometric system will incorrectly reject an authorized user. CER is the point where FAR and FRR are equal, and it is used to compare the performance of different biometric systems. Rejection Error Rate is the probability that a biometric system will fail to capture or process a biometric sample. References: CISSP For Dummies, 7th Edition, Chapter 4, page 95.
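The trade-off between FAR and FRR can be illustrated with a small sketch; the match scores below are invented, and a real system would use measured comparison scores, but the pattern holds: raising the acceptance threshold drives FAR down (the requirement in this question) at the cost of a higher FRR, and the CER is the threshold at which the two rates meet.

```python
# Illustrative sketch of how FAR, FRR, and the crossover (CER) relate. The
# match scores below are invented; a real system would use measured
# comparison scores from enrollment and verification attempts.
genuine_scores  = [0.81, 0.62, 0.92, 0.88, 0.58, 0.95, 0.84, 0.71]  # authorized users
impostor_scores = [0.30, 0.45, 0.66, 0.74, 0.25, 0.40, 0.55, 0.48]  # unauthorized users

def far_frr(threshold):
    # FAR: fraction of impostors accepted; FRR: fraction of genuine users rejected.
    far = sum(score >= threshold for score in impostor_scores) / len(impostor_scores)
    frr = sum(score < threshold for score in genuine_scores) / len(genuine_scores)
    return far, frr

# Raising the threshold lowers FAR (the priority when no unauthorized person
# may be admitted) at the cost of a higher FRR; the CER is where the two meet.
for threshold in (0.50, 0.70, 0.85):
    far, frr = far_frr(threshold)
    print(f"threshold={threshold:.2f}  FAR={far:.3f}  FRR={frr:.3f}")
```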
Which of the following is a security feature of Global Systems for Mobile Communications (GSM)?
It uses a Subscriber Identity Module (SIM) for authentication.
It uses encrypting techniques for all communications.
The radio spectrum is divided with multiple frequency carriers.
The signal is difficult to read as it provides end-to-end encryption.
A security feature of Global Systems for Mobile Communications (GSM) is that it uses a Subscriber Identity Module (SIM) for authentication. A SIM is a smart card that contains the subscriber’s identity, phone number, network information, and encryption keys. The SIM is inserted into the mobile device and communicates with the network to authenticate the subscriber and establish a secure connection. The SIM also stores the subscriber’s contacts, messages, and preferences. The SIM provides security by preventing unauthorized access to the subscriber’s account and data, and by allowing the subscriber to easily switch devices without losing their information. References: GSM - Security and Encryption; Introduction to GSM security.
The key benefits of a signed and encrypted e-mail include
confidentiality, authentication, and authorization.
confidentiality, non-repudiation, and authentication.
non-repudiation, authorization, and authentication.
non-repudiation, confidentiality, and authorization.
A signed and encrypted e-mail provides confidentiality by preventing unauthorized access to the message content, non-repudiation by verifying the identity and integrity of the sender, and authentication by ensuring that the message is from the claimed source. Authorization is not a benefit of a signed and encrypted e-mail, as it refers to the process of granting or denying access to resources based on predefined rules.
Which of the following is ensured when hashing files during chain of custody handling?
Availability
Accountability
Integrity
Non-repudiation
Hashing files during chain of custody handling ensures integrity, which means that the files have not been altered or tampered with during the collection, preservation, or analysis of digital evidence. Hashing is a process of applying a mathematical function to a file to generate a unique value, called a hash or a digest, that represents the file’s content. By comparing the hash values of the original and the copied files, the integrity of the files can be verified. Availability, accountability, and non-repudiation are not ensured by hashing files during chain of custody handling, as they are related to different aspects of information security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 633.
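A minimal sketch of how hashing supports chain of custody, using Python's standard hashlib module; the file names are placeholders for an acquired evidence image and a later working copy.

```python
import hashlib

def sha256_of_file(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example: hash the evidence at acquisition, record the value in the
# chain-of-custody documentation, and re-hash later to confirm integrity.
# ("evidence.img" and "working_copy.img" are placeholder file names.)
original_hash = sha256_of_file("evidence.img")
copy_hash = sha256_of_file("working_copy.img")
print("integrity preserved:", original_hash == copy_hash)
```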
Which one of the following is a threat related to the use of web-based client side input validation?
Users would be able to alter the input after validation has occurred
The web server would not be able to validate the input after transmission
The client system could receive invalid input from the web server
The web server would not be able to receive invalid input from the client
A threat related to the use of web-based client side input validation is that users would be able to alter the input after validation has occurred. Client side input validation is performed on the user’s browser using JavaScript or other scripting languages. It can provide faster and more user-friendly feedback to the user, but it can also be easily bypassed or manipulated by an attacker who disables JavaScript, uses a web proxy, or modifies the source code of the web page. Therefore, client side input validation should not be relied upon as the sole or primary method of preventing malicious or malformed input from reaching the web server. Server side input validation is also necessary to ensure the security and integrity of the web application. References: Input Validation - OWASP Cheat Sheet Series; Input Validation vulnerabilities and how to fix them.
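A minimal sketch of the server-side re-validation the explanation calls for; the field names and validation rules are illustrative, and a production application would typically use its web framework's validation facilities on top of the same principle.

```python
import re

# Minimal sketch: even if the browser validated the form with JavaScript, the
# server re-validates every field, because the client-side check can be
# bypassed with a proxy or by disabling scripts.

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def handle_registration(form):
    username = form.get("username", "")
    quantity = form.get("quantity", "")
    errors = []
    if not USERNAME_RE.fullmatch(username):
        errors.append("invalid username")
    if not quantity.isdigit() or not (1 <= int(quantity) <= 100):
        errors.append("quantity must be an integer between 1 and 100")
    if errors:
        return {"status": 400, "errors": errors}
    return {"status": 200, "username": username, "quantity": int(quantity)}

# A request that passed client-side validation but was tampered with in transit:
print(handle_registration({"username": "alice; DROP TABLE users", "quantity": "9999"}))
```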
As one component of a physical security system, an Electronic Access Control (EAC) token is BEST known for its ability to
overcome the problems of key assignments.
monitor the opening of windows and doors.
trigger alarms when intruders are detected.
lock down a facility during an emergency.
An Electronic Access Control (EAC) token is best known for its ability to overcome the problems of key assignments in a physical security system. An EAC token is a device that can be used to authenticate a user or grant access to a physical area or resource, such as a door, a gate, or a locker. An EAC token can be a smart card, a magnetic stripe card, a proximity card, a key fob, or a biometric device. An EAC token can overcome the problems of key assignments, which are the issues or challenges of managing and distributing physical keys to authorized users, such as lost, stolen, duplicated, or unreturned keys. An EAC token can provide more security, convenience, and flexibility than a physical key, as it can be easily activated, deactivated, or replaced, and it can also store additional information or perform other functions. Monitoring the opening of windows and doors, triggering alarms when intruders are detected, and locking down a facility during an emergency are not the abilities that an EAC token is best known for, as they are more related to the functions of other components of a physical security system, such as sensors, alarms, or locks. References: CISSP For Dummies, 7th Edition, Chapter 9, page 253.
Which one of these risk factors would be the LEAST important consideration in choosing a building site for a new computer facility?
Vulnerability to crime
Adjacent buildings and businesses
Proximity to an airline flight path
Vulnerability to natural disasters
Proximity to an airline flight path is the least important consideration in choosing a building site for a new computer facility, as it poses the lowest risk factor compared to the other options. Proximity to an airline flight path may cause some noise or interference issues, but it is unlikely to result in a major disaster or damage to the computer facility, unless there is a rare case of a plane crash or a terrorist attack. Vulnerability to crime, adjacent buildings and businesses, and vulnerability to natural disasters are more important considerations in choosing a building site for a new computer facility, as they can pose significant threats to the physical security, availability, and integrity of the facility and its assets. Vulnerability to crime can expose the facility to theft, vandalism, or sabotage. Adjacent buildings and businesses can affect the fire safety, power supply, or environmental conditions of the facility. Vulnerability to natural disasters can cause the facility to suffer from floods, earthquakes, storms, or fires. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 10, page 543.
What is an effective practice when returning electronic storage media to third parties for repair?
Ensuring the media is not labeled in any way that indicates the organization's name.
Disassembling the media and removing parts that may contain sensitive data.
Physically breaking parts of the media that may contain sensitive data.
Establishing a contract with the third party regarding the secure handling of the media.
When returning electronic storage media to third parties for repair, it is important to ensure that the media does not contain any sensitive or confidential data that could be compromised or disclosed. One way to do this is to establish a contract with the third party that specifies the security requirements and obligations for the handling, storage, and disposal of the media. This contract should also include provisions for auditing, reporting, and remediation in case of any security incidents or breaches. The other options are not effective practices, as they either do not protect the data adequately (A), damage the media unnecessarily (B and C), or are not feasible or practical in some cases. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 239; CISSP For Dummies, 7th Edition, Chapter 2, page 43.
Which of the following methods protects Personally Identifiable Information (PII) by use of a full replacement of the data element?
Transparent Database Encryption (TDE)
Column level database encryption
Volume encryption
Data tokenization
Data tokenization is a method of protecting PII by replacing the sensitive data element with a non-sensitive equivalent, called a token, that has no extrinsic or exploitable meaning or value. The token is then mapped back to the original data element in a secure database. This way, the PII is not exposed in the data processing or storage, and only authorized parties can access the original data element. Data tokenization is different from encryption, which transforms the data element into a ciphertext that can be decrypted with a key. Data tokenization does not require a key, and the token cannot be reversed to reveal the original data element. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 281; CISSP For Dummies, 7th Edition, Chapter 10, page 289.
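A minimal sketch of tokenization, assuming an in-memory dictionary stands in for the secure token vault; a real deployment would keep the vault in a hardened, access-controlled data store, and the card number shown is a standard test value.

```python
import secrets

# Illustrative sketch of tokenization: the PII value is fully replaced by a
# random token with no mathematical relationship to the original, and the
# mapping lives only in a protected token vault (here, an in-memory dict).

token_vault = {}  # token -> original value (stand-in for a secured database)

def tokenize(pii_value):
    token = "tok_" + secrets.token_hex(8)
    token_vault[token] = pii_value
    return token

def detokenize(token):
    # In a real system this lookup would be restricted to authorized callers.
    return token_vault.get(token)

card_token = tokenize("4111-1111-1111-1111")
print(card_token)               # e.g. tok_9f2c... -- safe to store or process
print(detokenize(card_token))   # original value, only via the vault
```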
Which type of control recognizes that a transaction amount is excessive in accordance with corporate policy?
Detection
Prevention
Investigation
Correction
A detection control is a type of control that identifies and reports the occurrence of an unwanted event, such as a violation of a policy or a threshold. A detection control does not prevent or correct the event, but rather alerts the appropriate personnel or system to take action. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 29; CISSP For Dummies, 7th Edition, Chapter 1, page 21.
Which of the following is an attacker MOST likely to target to gain privileged access to a system?
Programs that write to system resources
Programs that write to user directories
Log files containing sensitive information
Log files containing system calls
An attacker is most likely to target programs that write to system resources to gain privileged access to a system. System resources are the hardware and software components that are essential for the operation and functionality of a system, such as the CPU, memory, disk, network, operating system, drivers, libraries, etc. Programs that write to system resources may have higher privileges or permissions than programs that write to user directories or log files. An attacker may exploit vulnerabilities or flaws in these programs to execute malicious code, escalate privileges, or bypass security controls. Programs that write to user directories or log files are less likely to be targeted by an attacker, as they may have lower privileges or permissions, and may not contain sensitive information or system calls. User directories are the folders or locations where users store their personal files or data. Log files are the records of events or activities that occur in a system or application.
A practice that permits the owner of a data object to grant other users access to that object would usually provide
Mandatory Access Control (MAC).
owner-administered control.
owner-dependent access control.
Discretionary Access Control (DAC).
A practice that permits the owner of a data object to grant other users access to that object would usually provide Discretionary Access Control (DAC). DAC is a type of access control that allows the data owner or creator to decide who can access or modify the data object, based on their identity or membership in a group. DAC is implemented using access control lists (ACLs), which specify the permissions or rights of each user or group for each data object. DAC is flexible and easy to implement, but it can also pose a security risk if the data owner grants excessive or inappropriate access to unauthorized or malicious users. Mandatory Access Control (MAC), owner-administered control, and owner-dependent access control are not types of access control that permit the owner of a data object to grant other users access to that object, as they are either based on predefined rules or policies, or not related to access control at all. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 354.
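A minimal sketch of DAC, assuming a simple in-memory ACL; the users, object name, and permissions are illustrative.

```python
# Minimal sketch of Discretionary Access Control: the object's owner maintains
# an access control list (ACL) and may grant other users access at their own
# discretion.

acl = {
    "quarterly_report.xlsx": {
        "owner": "alice",
        "permissions": {"alice": {"read", "write"}, "bob": {"read"}},
    }
}

def grant(obj, requester, user, right):
    entry = acl[obj]
    if requester != entry["owner"]:
        raise PermissionError("only the owner may grant access under DAC")
    entry["permissions"].setdefault(user, set()).add(right)

def check_access(obj, user, right):
    return right in acl[obj]["permissions"].get(user, set())

grant("quarterly_report.xlsx", "alice", "carol", "read")         # owner grants access
print(check_access("quarterly_report.xlsx", "carol", "read"))    # True
print(check_access("quarterly_report.xlsx", "carol", "write"))   # False
```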
The Hardware Abstraction Layer (HAL) is implemented in the
system software.
system hardware.
application software.
network hardware.
The Hardware Abstraction Layer (HAL) is implemented in the system software. The system software is the software that controls and manages the basic operations and functions of the computer system, such as the operating system, the device drivers, the firmware, and the BIOS. The HAL is a component of the system software that provides a common interface between the hardware and the software layers of the system. The HAL abstracts the details and differences of the hardware devices and components, and allows the software to interact with the hardware in a consistent and uniform way. The HAL also enables the system to support multiple hardware platforms and configurations without requiring changes in the software. References: What is Hardware Abstraction Layer (HAL)?; Hardware Abstraction Layer (HAL) - GeeksforGeeks.
Which of the following is the MAIN reason that system re-certification and re-accreditation are needed?
To assist data owners in making future sensitivity and criticality determinations
To assure the software development team that all security issues have been addressed
To verify that security protection remains acceptable to the organizational security policy
To help the security team accept or reject new systems for implementation and production
The main reason that system re-certification and re-accreditation are needed is to verify that the security protection of the system remains acceptable to the organizational security policy, especially after significant changes or updates to the system. Re-certification is the process of reviewing and testing the security controls of the system to ensure that they are still effective and compliant with the security policy. Re-accreditation is the process of authorizing the system to operate based on the results of the re-certification. The other options are not the main reason for system re-certification and re-accreditation, as they either do not relate to the security protection of the system (A and D), or do not involve re-certification and re-accreditation (B). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 633; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 10, page 695.
In Disaster Recovery (DR) and business continuity training, which BEST describes a functional drill?
A full-scale simulation of an emergency and the subsequent response functions
A specific test by response teams of individual emergency response functions
A functional evacuation of personnel
An activation of the backup site
A functional drill is a type of disaster recovery and business continuity training that involves a specific test by response teams of individual emergency response functions, such as fire suppression, medical assistance, or data backup. A functional drill is designed to evaluate the performance, coordination, and effectiveness of the response teams and the emergency procedures. A functional drill is not the same as a full-scale simulation, a functional evacuation, or an activation of the backup site. A full-scale simulation is a type of disaster recovery and business continuity training that involves a realistic and comprehensive scenario of an emergency and the subsequent response functions, involving all the stakeholders, resources, and equipment. A functional evacuation is a type of disaster recovery and business continuity training that involves the orderly and safe movement of personnel from a threatened or affected area to a safe location. An activation of the backup site is a type of disaster recovery and business continuity action that involves the switching of operations from the primary site to the secondary site in the event of a disaster or disruption.
Which of the following is the FIRST step of a penetration test plan?
Analyzing a network diagram of the target network
Notifying the company's customers
Obtaining the approval of the company's management
Scheduling the penetration test during a period of least impact
The first step of a penetration test plan is to obtain the approval of the company’s management, as well as the consent of the target network’s owner or administrator. This is essential to ensure the legality, ethics, and scope of the test, as well as to define the objectives, expectations, and deliverables of the test. Without proper authorization, a penetration test could be considered an unauthorized or malicious attack, and could result in legal or reputational consequences. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 758; CISSP For Dummies, 7th Edition, Chapter 7, page 234.
Internet Protocol (IP) source address spoofing is used to defeat
address-based authentication.
Address Resolution Protocol (ARP).
Reverse Address Resolution Protocol (RARP).
Transmission Control Protocol (TCP) hijacking.
Internet Protocol (IP) source address spoofing is used to defeat address-based authentication, which is a method of verifying the identity of a user or a system based on their IP address. IP source address spoofing involves forging the IP header of a packet to make it appear as if it came from a trusted or authorized source, and bypassing the authentication check. IP source address spoofing can be used for various malicious purposes, such as denial-of-service attacks, man-in-the-middle attacks, or session hijacking. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 527; CISSP For Dummies, 7th Edition, Chapter 5, page 153.
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, macros, etc. Mobile code can also pose various security risks, such as malicious code, unauthorized access, data leakage, etc. Mobile code security models are the techniques that are used to protect the systems and users from the threats of mobile code. Code signing is a mobile code security model that relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider signs the mobile code with its private key and attaches a digital certificate issued by a certificate authority; the code consumer verifies the signature and the certificate with the corresponding public key and then decides whether to trust the provider and execute the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows: the client requests a secure connection and the server responds with its digital certificate, which contains the server’s public key; the client validates the certificate, generates a pre-master secret, and encrypts it with the server’s public key; the server decrypts the pre-master secret with its private key, and both parties derive symmetric session keys from it to encrypt and authenticate the rest of the communication.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. Diffie-Hellman algorithm is a method for generating a shared secret key between two parties, but it does not use private and public encryption keys, but rather public and private parameters. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same key for encryption and decryption, but it does not use private and public encryption keys, but rather a single secret key. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input, but it does not use private and public encryption keys, but rather a one-way mathematical function.
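For illustration, the sketch below uses Python's standard ssl module to perform a TLS handshake (the modern successor to SSL) and inspect the server's certificate; example.com is a placeholder host, and the call requires network access.

```python
import socket
import ssl

# Minimal sketch: the public/private key pair shows up in the TLS/SSL handshake
# as the server's certificate; the client verifies it, and the handshake then
# establishes symmetric session keys for the rest of the connection.
context = ssl.create_default_context()  # verifies the certificate chain by default

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        cert = tls_sock.getpeercert()          # server's public-key certificate
        print("negotiated:", tls_sock.version(), tls_sock.cipher()[0])
        print("certificate subject:", cert.get("subject"))
```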
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
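A minimal sketch of the compress-then-encrypt ordering described above, assuming the third-party cryptography package for the symmetric encryption step; the message content is illustrative.

```python
import zlib
from cryptography.fernet import Fernet

# Sketch of the "compress, then encrypt" ordering. Compression must happen
# first: after encryption the ciphertext looks random and will not compress.
plaintext = b"MEET AT DAWN. " * 50          # highly redundant message

key = Fernet.generate_key()
cipher = Fernet(key)

compressed = zlib.compress(plaintext)        # removes redundancy / raises entropy
ciphertext = cipher.encrypt(compressed)      # then encrypt the compressed bytes

recovered = zlib.decompress(cipher.decrypt(ciphertext))
assert recovered == plaintext
print(f"plaintext {len(plaintext)} bytes -> compressed {len(compressed)} bytes")
```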
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting cipher text with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. This process works as follows: the sender computes a hash of the message and encrypts the hash with the sender’s private key, producing a digital signature that is attached to the message; the receiver decrypts the signature with the sender’s public key, recomputes the hash of the received message, and compares the two values to confirm the sender’s identity and the message’s integrity.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
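As a small worked example of how a CVSS score translates into a qualitative rating, the helper below applies the CVSS v3.x severity bands; computing the numeric base score itself requires the full base-metric formula, which is omitted here.

```python
# Illustrative helper: map a CVSS numeric score to the qualitative severity
# bands defined for CVSS v3.x.
def cvss_severity(score):
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

for score in (0.0, 3.1, 5.6, 8.8, 9.8):
    print(score, "->", cvss_severity(score))
```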
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase typically involves generating the key pair, registering the end entity and verifying its identity with the RA, and preparing and submitting the certificate request to the certification authority (CA).
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
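A minimal sketch of the key-pair and certificate-request generation that the initialization phase involves, using the third-party cryptography package; the subject names are illustrative, and a real request would then be submitted to the CA for certification.

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Step 1: the end entity generates its key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2: build a certificate signing request (CSR) containing the public key
# and the subject's identifying information, signed with the private key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "alice.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    .sign(private_key, hashes.SHA256())
)

# The PEM-encoded CSR is what would be submitted to the CA in the next phase.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```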
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of a new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center, is preparing a companywide Business Continuity Planning (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
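As an illustration of the principle rather than of any particular backup product, the minimal Python sketch below (using the third-party cryptography library's Fernet construction; the sample data is made up) shows backup content being encrypted before it is ever written to tape, with the key assumed to live in a separate key-management system:

from cryptography.fernet import Fernet

# Key generated and held in a key-management system, separate from the tapes.
key = Fernet.generate_key()

backup_data = b"employee,salary\nalice,50000\n"   # stand-in for a backup image

# Encrypt before the data ever reaches the tape drive.
ciphertext = Fernet(key).encrypt(backup_data)

# Only the ciphertext goes to (or ships on) the tape; anyone who obtains the
# tape without the key sees only random-looking bytes.
restored = Fernet(key).decrypt(ciphertext)
assert restored == backup_data

The design point is that the tape carries only ciphertext, so losing the tape without the separately stored key does not disclose the data.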
The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. Failing to implement network redundancies affects the availability and reliability of the network, but not necessarily the confidentiality of the data. Incomplete security awareness training increases the likelihood of human error or negligence that could compromise the data, but not as directly as shipping unencrypted backup tapes. Giving users administrative privileges grants them more access and control over the system and the data, but it does not expose the data as broadly as unencrypted tapes leaving the organization's control.
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
Identify the operational impacts of a business interruption
Identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that determine the technological dependence of the business processes, identify the operational impacts of a business interruption, and identify the financial impacts of a business interruption.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
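As a rough sketch of the overwriting idea only, the Python snippet below repeatedly overwrites a single file with random data before deleting it; the path is hypothetical, and purging a whole drive in practice relies on dedicated sanitization tools, cryptographic erasure, degaussing, or physical destruction, since journaling file systems and SSD wear leveling can leave residual copies:

import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random data several times, then remove it.

    Illustrative only: it does not handle journaling file systems, SSD wear
    leveling, or slack space, which is why dedicated purge tools or physical
    destruction are used for real media sanitization.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Hypothetical usage before handing the laptop to a traveler:
# overwrite_and_delete("/home/user/Documents/board_minutes.docx")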
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. Having the department report to the business owner is a management issue, not a security issue. Periodically reviewing ownership of the asset is a good practice, but it does not prevent misuse or abuse of the asset. Training all members on their responsibilities is a preventive measure, but it does not guarantee compliance with or enforcement of those responsibilities.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification, which limits the amount of data that can be retrieved or displayed at a time.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. Asymmetric CAK challenge-response works by having the reader issue a fresh random challenge to the card, which signs the challenge with the private key held in its secure chip; the reader then verifies the signature using the public key from the card’s authentication certificate before granting access, and where mutual authentication is required, the card can challenge the reader in the same way.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
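A simplified model of such an asymmetric challenge-response, written with the Python cryptography library's ECDSA primitives, is sketched below; the key names and flow are illustrative only, since the real CAK exchange is defined by the smart card standards and the private key never leaves the card hardware:

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The card's private key never leaves the card; the reader learns only the
# public key, typically from the card authentication certificate.
card_private_key = ec.generate_private_key(ec.SECP256R1())
card_public_key = card_private_key.public_key()

# Reader: issue a fresh, random challenge for this transaction.
challenge = os.urandom(32)

# Card: sign the challenge with the private key held in secure hardware.
signature = card_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Reader: verify the signature against the certified public key.
try:
    card_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Card authenticated")
except InvalidSignature:
    print("Rejected - a cloned card cannot produce this signature")

Because the challenge is freshly generated for each transaction, a captured signature cannot be replayed, which is the property that defeats cloned cards.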
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather potential challenges or benefits of data classification. Applying the scheme consistently to both hardware and software is a challenge of data classification, as it requires compatible methods and tools for labeling and protecting data across different types of media and devices. Assigning ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. Having the process perceived as valuable is a benefit of data classification, as it demonstrates the organization's commitment to protecting its data assets and complying with its obligations.
According to best practice, which of the following is required when implementing third party software in a production environment?
Scan the application for vulnerabilities
Contract the vendor for patching
Negotiate end user application training
Escrow a copy of the software
According to best practice, one of the requirements when implementing third party software in a production environment is to scan the application for vulnerabilities. Vulnerabilities are weaknesses or flaws in the software that can be exploited by attackers to compromise the security, functionality, or performance of the system or network. Scanning the application for vulnerabilities can help to identify and mitigate the potential risks, ensure the compliance with the security policies and standards, and prevent the introduction of malicious code or backdoors. Contracting the vendor for patching, negotiating end user application training, and escrowing a copy of the software are all possible requirements when implementing third party software in a production environment, but they are not the most essential or best practice requirement of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1018. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1040.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will MOST likely allow the organization to keep risk at an acceptable level?
Increasing the amount of audits performed by third parties
Removing privileged accounts from operational staff
Assigning privileged functions to appropriate staff
Separating the security function into distinct roles
The most likely action that will allow the organization to keep risk at an acceptable level is separating the security function into distinct roles. Separating the security function into distinct roles means to create and assign the specific and dedicated roles or positions for the security activities and initiatives, such as the security planning, the security implementation, the security monitoring, or the security auditing, and to separate them from the normal IT operations. Separating the security function into distinct roles can help to keep risk at an acceptable level, as it can enhance the security performance and effectiveness, by providing the authority, the resources, the guidance, and the accountability for the security roles, and by supporting the principle of least privilege and the separation of duties. Increasing the amount of audits performed by third parties, removing privileged accounts from operational staff, and assigning privileged functions to appropriate staff are not the most likely actions that will allow the organization to keep risk at an acceptable level, as they are related to the evaluation, the restriction, or the allocation of the security access or activity, not the separation of the security function into distinct roles. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 32. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 47.
What is the MAIN feature that onion routing networks offer?
Non-repudiation
Traceability
Anonymity
Resilience
The main feature that onion routing networks offer is anonymity. Anonymity is the state of being unknown or unidentifiable by hiding or masking the identity or the location of the sender or the receiver of a communication. Onion routing is a technique that enables anonymous communication over a network, such as the internet, by encrypting and routing the messages through multiple layers of intermediate nodes, called onion routers. Onion routing can protect the privacy and security of the users or the data, and can prevent censorship, surveillance, or tracking by third parties. Non-repudiation, traceability, and resilience are not the main features that onion routing networks offer, as they are related to the proof, tracking, or recovery of the communication, not the anonymity of the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 467. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 483.
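The layered-encryption idea can be sketched in a few lines of Python; this toy model with three Fernet keys is not the actual Tor protocol (which uses its own circuit construction and cell format), but it shows how each relay can remove exactly one layer and learns nothing beyond the next hop's ciphertext:

from cryptography.fernet import Fernet

# One symmetric key per relay on the path (negotiated per circuit in practice).
relay_keys = [Fernet.generate_key() for _ in range(3)]

message = b"GET /index.html"

# Sender: wrap the message in layers, innermost layer for the last relay.
onion = message
for key in reversed(relay_keys):
    onion = Fernet(key).encrypt(onion)

# Each relay peels exactly one layer; only the exit relay sees the plaintext,
# and no single relay knows both the origin and the final content.
for key in relay_keys:
    onion = Fernet(key).decrypt(onion)

assert onion == message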
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
When determining appropriate resource allocation, which of the following is MOST important to monitor?
Number of system compromises
Number of audit findings
Number of staff reductions
Number of additional assets
The most important factor to monitor when determining appropriate resource allocation is the number of system compromises. The number of system compromises is the count or the frequency of the security incidents or breaches that affect the confidentiality, the integrity, or the availability of the system data or functionality, and that are caused by the unauthorized or the malicious access or activity. The number of system compromises can help to determine appropriate resource allocation, as it can indicate the level of security risk or threat that the system faces, and the level of security protection or improvement that the system needs. The number of system compromises can also help to evaluate the effectiveness or the efficiency of the current resource allocation, and to identify the areas or the domains that require more or less resources. Number of audit findings, number of staff reductions, and number of additional assets are not the most important factors to monitor when determining appropriate resource allocation, as they are related to the results or the outcomes of the audit process, the changes or the impacts of the staff size, or the additions or the expansions of the system resources, not the security incidents or breaches that affect the system data or functionality. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 863. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 879.
Which of the following is an example of two-factor authentication?
Retina scan and a palm print
Fingerprint and a smart card
Magnetic stripe card and an ID badge
Password and Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA)
An example of two-factor authentication is fingerprint and a smart card. Two-factor authentication is a type of authentication that requires two different factors or methods to verify the identity or the credentials of a user or a device. The factors or methods can be categorized into three types: something you know, something you have, or something you are. Something you know is a factor that relies on the knowledge of the user or the device, such as a password, a PIN, or a security question. Something you have is a factor that relies on the possession of the user or the device, such as a smart card, a token, or a certificate. Something you are is a factor that relies on the biometrics of the user or the device, such as a fingerprint, a retina scan, or a voice recognition. Fingerprint and a smart card are an example of two-factor authentication, as they combine two different factors: something you are and something you have. Retina scan and a palm print are not an example of two-factor authentication, as they are both the same factor: something you are. Magnetic stripe card and an ID badge are not an example of two-factor authentication, as they are both the same factor: something you have. Password and CAPTCHA are not an example of two-factor authentication, as they are both the same factor: something you know. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
Which of the following are required components for implementing software configuration management systems?
Audit control and signoff
User training and acceptance
Rollback and recovery processes
Regression testing and evaluation
The required components for implementing software configuration management systems are audit control and signoff, rollback and recovery processes, and regression testing and evaluation. Software configuration management systems are tools and techniques that enable the identification, control, tracking, and verification of the changes and versions of software products throughout the software development life cycle. Audit control and signoff are the mechanisms that ensure that the changes and versions of the software products are authorized, documented, reviewed, and approved by the appropriate stakeholders. Rollback and recovery processes are the procedures that enable the restoration of the previous state or version of the software products in case of a failure or error. Regression testing and evaluation are the methods that verify that the changes and versions of the software products do not introduce new defects or affect the existing functionality or performance. User training and acceptance are not required components for implementing software configuration management systems, as they are related to the deployment and operation of the software products, not the configuration management. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1037. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1063.
A system is developed so that its business users can perform business functions but not user administration functions. Application administrators can perform administration functions but not user business functions. These capabilities are BEST described as
least privilege.
rule based access controls.
Mandatory Access Control (MAC).
separation of duties.
The capabilities of the system that allow its business users to perform business functions but not user administration functions, and its application administrators to perform administration functions but not user business functions, are best described as separation of duties. Separation of duties is a security principle that divides the roles and responsibilities of different tasks or functions among different individuals or groups, so that no one person or group has complete control or authority over a critical process or asset. Separation of duties can help to prevent fraud, collusion, abuse, or errors, and to ensure accountability, oversight, and checks and balances. Least privilege, rule based access controls, and Mandatory Access Control (MAC) are not the best descriptions of the capabilities of the system, as they do not reflect the division of roles and responsibilities among different users or groups. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 32. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 45.
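A toy Python model of the arrangement described above is sketched below; the role names and function names are hypothetical, and a real system would enforce this in its access control layer rather than in application code:

# Map each role to the functions it may perform; no role holds both sets,
# which is the separation the question describes.
ROLE_PERMISSIONS = {
    "business_user": {"enter_order", "approve_invoice"},
    "app_admin": {"create_account", "reset_password"},
}

def invoke(role: str, function: str) -> str:
    if function in ROLE_PERMISSIONS.get(role, set()):
        return f"{role} performed {function}"
    raise PermissionError(f"{role} may not perform {function}")

print(invoke("business_user", "enter_order"))      # allowed
print(invoke("app_admin", "create_account"))       # allowed
# invoke("app_admin", "approve_invoice")           # raises PermissionError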
During an audit, the auditor finds evidence of potentially illegal activity. Which of the following is the MOST appropriate action to take?
Immediately call the police
Work with the client to resolve the issue internally
Advise the person performing the illegal activity to cease and desist
Work with the client to report the activity to the appropriate authority
The most appropriate action to take when the auditor finds evidence of potentially illegal activity is to work with the client to report the activity to the appropriate authority. An auditor performs an independent and objective examination of a system, process, or activity to provide assurance, evaluation, or improvement, and has a duty and a responsibility to report any evidence of potentially illegal activity found during the audit, as it can affect the security, compliance, or integrity of what is being audited. Working with the client to report the activity to the appropriate authority, such as law enforcement, a regulatory body, or senior management, preserves the cooperation, communication, and transparency between the auditor and the client, and satisfies the legal, contractual, and ethical obligations of both parties. The auditor should not immediately call the police, work with the client to resolve the issue internally, or merely advise the person performing the illegal activity to cease and desist, as these actions can bypass, undermine, or interfere with that cooperation and transparency, or violate those legal, contractual, and ethical obligations. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 894. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 910.
What is the PRIMARY reason for ethics awareness and related policy implementation?
It affects the workflow of an organization.
It affects the reputation of an organization.
It affects the retention rate of employees.
It affects the morale of the employees.
The primary reason for ethics awareness and related policy implementation is to affect the reputation of an organization positively, by demonstrating its commitment to ethical principles, values, and standards in its business practices, services, and products. Ethics awareness and policy implementation can also help the organization avoid legal liabilities, fines, or sanctions for unethical conduct, and foster trust and loyalty among its customers, partners, and employees. The other options are not as important as reputation: workflow does not directly relate to ethics, while employee retention and morale are secondary outcomes of ethical behavior. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 28.
Which of the following methods provides the MOST protection for user credentials?
Forms-based authentication
Digest authentication
Basic authentication
Self-registration
The method that provides the most protection for user credentials is digest authentication. Digest authentication is a type of authentication that verifies the identity of a user or a device by using a cryptographic hash function to transform the user credentials, such as username and password, into a digest or a hash value, before sending them over a network, such as the internet. Digest authentication can provide more protection for user credentials than basic authentication, which sends the user credentials in plain text, or forms-based authentication, which relies on the security of the web server or the web application. Digest authentication can prevent the interception, disclosure, or modification of the user credentials by third parties, and can also prevent replay attacks by using a nonce or a random value. Self-registration is not a method of authentication, but a process of creating a user account or a profile by providing some personal information, such as name, email, or phone number. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
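The arithmetic of the classic HTTP Digest scheme (RFC 2617 without the qop extension) can be sketched with the Python standard library; the username, realm, nonce, and URI values below are made up, and modern deployments would still run this over TLS or prefer stronger mechanisms:

import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Values normally supplied in the HTTP exchange (all hypothetical here).
username, realm, password = "alice", "example-realm", "s3cret"
method, uri = "GET", "/protected/report"
nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c093"   # fresh value chosen by the server

ha1 = md5_hex(f"{username}:{realm}:{password}")   # credential digest
ha2 = md5_hex(f"{method}:{uri}")                  # request digest
response = md5_hex(f"{ha1}:{nonce}:{ha2}")        # what the client sends

# The password itself never crosses the wire; the server recomputes the same
# digest from its stored credential and compares, and the nonce thwarts replay.
print(response)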
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following documents explains the proper use of the organization's assets?
Human resources policy
Acceptable use policy
Code of ethics
Access control policy
The document that explains the proper use of the organization’s assets is the acceptable use policy. An acceptable use policy is a document that defines the rules and guidelines for the appropriate and responsible use of the organization’s information systems and resources, such as computers, networks, or devices. An acceptable use policy can help to prevent or reduce the misuse, abuse, or damage of the organization’s assets, and to protect the security, privacy, and reputation of the organization and its users. An acceptable use policy can also specify the consequences or penalties for violating the policy, such as disciplinary actions, termination, or legal actions. A human resources policy, a code of ethics, and an access control policy are not the documents that explain the proper use of the organization’s assets, as they are related to the management, values, or authorization of the organization’s employees or users, not the usage or responsibility of the organization’s information systems or resources. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 47. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 62.
Which of the following methods of suppressing a fire is environmentally friendly and the MOST appropriate for a data center?
Inert gas fire suppression system
Halon gas fire suppression system
Dry-pipe sprinklers
Wet-pipe sprinklers
The most environmentally friendly and appropriate method of suppressing a fire in a data center is to use an inert gas fire suppression system. An inert gas fire suppression system is a type of gaseous fire suppression system that uses an inert gas, such as nitrogen, argon, or carbon dioxide, to extinguish a fire by displacing the oxygen in the area and reducing the oxygen concentration below the level that supports combustion. It is environmentally friendly because it does not produce harmful or toxic by-products and does not deplete the ozone layer, and it is appropriate for a data center because it does not damage electronic equipment and poses no health risk to personnel as long as the oxygen level stays above the minimum required for human survival.
A halon gas fire suppression system extinguishes a fire by interrupting the chemical reaction of combustion, but it is not environmentally friendly, as it produces harmful or toxic by-products and depletes the ozone layer, and it is not appropriate for a data center, as it poses health risks to personnel and is banned or restricted in many countries.
Dry-pipe sprinklers fill their pipes with pressurized air or nitrogen and release water from the sprinkler heads when a fire is detected, while wet-pipe sprinklers keep the pipes filled with pressurized water at all times. Neither is environmentally friendly, as both consume water, a scarce and valuable resource, and may cause water pollution or contamination, and neither is appropriate for a data center, as the discharged water can damage electronic equipment and false alarms or accidental discharges are possible.
An organization adopts a new firewall hardening standard. How can the security professional verify that the technical staff correctly implemented the new standard?
Perform a compliance review
Perform a penetration test
Train the technical staff
Survey the technical staff
A compliance review is a process of checking whether the systems and processes meet the established standards, policies, and regulations. A compliance review can help to verify that the technical staff has correctly implemented the new firewall hardening standard, as well as to identify and correct any deviations or violations. A penetration test, a training session, or a survey are not as effective as a compliance review, as they may not cover all the aspects of the firewall hardening standard or provide sufficient evidence of compliance. References: CISSP Exam Outline
What does electronic vaulting accomplish?
It protects critical files.
It ensures the fault tolerance of Redundant Array of Independent Disks (RAID) systems
It stripes all database records
It automates the Disaster Recovery Process (DRP)
Electronic vaulting protects critical files. Electronic vaulting is a backup method that periodically transmits copies of critical files in batches over a network to an offsite storage facility (the vault), so that current copies of the data survive a disaster at the primary site. It does not provide fault tolerance for RAID systems, stripe database records, or automate the Disaster Recovery Process (DRP).
Which of the following is a responsibility of a data steward?
Ensure alignment of the data governance effort to the organization.
Conduct data governance interviews with the organization.
Document data governance requirements.
Ensure that data decisions and impacts are communicated to the organization.
A responsibility of a data steward is to ensure that data decisions and impacts are communicated to the organization. A data steward is a role or a function that is responsible for managing and maintaining the quality and the usability of the data within a specific data domain or a business area, such as finance, marketing, or human resources. A data steward can provide some benefits for data governance, which is the process of establishing and enforcing the policies and standards for the collection, use, storage, and protection of data, such as enhancing the accuracy and the reliability of the data, preventing or detecting errors or inconsistencies, and supporting the audit and the compliance activities. A data steward can perform various tasks or duties, such as defining and enforcing data quality rules, monitoring and resolving data quality issues, maintaining metadata and data definitions, and communicating data decisions and their impacts to the data users and stakeholders.
Ensuring that data decisions and impacts are communicated to the organization is a responsibility of a data steward, as it can help to ensure the transparency and the accountability of the data governance process, as well as to facilitate the coordination and the cooperation of the data governance stakeholders, such as the data owners, the data custodians, the data users, and the data governance team. Ensuring alignment of the data governance effort to the organization, conducting data governance interviews with the organization, and documenting data governance requirements are not responsibilities of a data steward, although they may be related or possible tasks or duties. Ensuring alignment of the data governance effort to the organization is a responsibility of the data governance team, which is a group of experts or advisors who are responsible for defining and implementing the data governance policies and standards, as well as for overseeing and evaluating the data governance process and performance. Conducting data governance interviews with the organization is a task or a technique that can be used by the data governance team, the data steward, or the data auditor, to collect and analyze the information and the feedback about the data governance process and performance, from the data governance stakeholders, such as the data owners, the data custodians, the data users, or the data consumers. Documenting data governance requirements is a task or a technique that can be used by the data governance team, the data owner, or the data user, to specify and describe the needs and the expectations of the data governance process and performance, such as the data quality, the data security, or the data compliance.
After following the processes defined within the change management plan, a super user has upgraded a
device within an Information system.
What step would be taken to ensure that the upgrade did NOT affect the network security posture?
Conduct an Assessment and Authorization (A&A)
Conduct a security impact analysis
Review the results of the most recent vulnerability scan
Conduct a gap analysis with the baseline configuration
A security impact analysis is a process of assessing the potential effects of a change on the security posture of a system. It helps to identify and mitigate any security risks that may arise from the change, such as new vulnerabilities, configuration errors, or compliance issues. A security impact analysis should be conducted after following the change management plan and before implementing the change in the production environment. Conducting an A&A, reviewing the results of a vulnerability scan, or conducting a gap analysis with the baseline configuration are also possible steps to ensure the security of a system, but they are not specific to the change management process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 961; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 8: Security Operations, page 1013.
A security compliance manager of a large enterprise wants to reduce the time it takes to perform network,
system, and application security compliance audits while increasing quality and effectiveness of the results.
What should be implemented to BEST achieve the desired results?
Configuration Management Database (CMDB)
Source code repository
Configuration Management Plan (CMP)
System performance monitoring application
A Configuration Management Database (CMDB) is a database that stores information about configuration items (CIs) for use in change, release, incident, service request, problem, and configuration management processes. A CI is any component or resource that is part of a system or a network, such as hardware, software, documentation, or personnel. A CMDB can provide some benefits for security compliance audits, such as providing a centralized and current inventory of the CIs that are in scope for the audit, showing the relationships and dependencies among those CIs, and supporting the tracking and verification of changes to them, which reduces the time needed to gather evidence and improves the accuracy and consistency of the audit results.
A source code repository, a configuration management plan (CMP), and a system performance monitoring application are not the best options to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, although they may be related or useful tools or techniques. A source code repository is a database or a system that stores and manages the source code of a software or an application, and that supports version control, collaboration, and documentation of the code. A source code repository can provide some benefits for security compliance audits, such as preserving the history of code changes and their authors, supporting the review and documentation of those changes, and enabling verification of the integrity and provenance of the application code.
However, a source code repository is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is only applicable to the application layer, and it does not provide information about the other CIs that are part of the system or the network, such as hardware, documentation, or personnel. A configuration management plan (CMP) is a document or a policy that defines and describes the objectives, scope, roles, responsibilities, processes, and procedures of configuration management, which is the process of identifying, controlling, tracking, and auditing the changes to the CIs. A CMP can provide some benefits for security compliance audits, such as defining the roles, responsibilities, and procedures for controlling changes to the CIs, and establishing the criteria and documentation against which the configuration management process can be audited.
However, a CMP is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is not a database or a system that stores and provides information about the CIs, but rather a document or a policy that defines and describes the configuration management process. A system performance monitoring application is a software or a tool that collects and analyzes data and metrics about the performance and the behavior of a system or a network, such as availability, reliability, throughput, response time, or resource utilization. A system performance monitoring application can provide some benefits for security compliance audits, such as:
However, a system performance monitoring application is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is only applicable to the network and system layers, and it does not provide information about the other CIs that are part of the system or the network, such as software, documentation, or personnel.
Which of the following mechanisms will BEST prevent a Cross-Site Request Forgery (CSRF) attack?
parameterized database queries
whitelist input values
synchronized session tokens
use strong ciphers
The best mechanism to prevent a Cross-Site Request Forgery (CSRF) attack is to use synchronized session tokens. A CSRF attack is a type of web application vulnerability that exploits the trust a site has in a user's browser: a malicious site, email, or link tricks the browser into sending a forged request to a vulnerable site where the user is already authenticated. The vulnerable site cannot distinguish the forged request from a legitimate one and may perform an unwanted action on behalf of the user, such as changing a password, transferring funds, or deleting data. Synchronized session tokens prevent CSRF attacks by adding a random, unique value to each request that is generated by the server and verified by the server before the request is processed. The token is usually stored in a hidden form field or a custom HTTP header and is tied to the user's session, ensuring that the request originates from the site that issued it and not from a malicious site. Synchronized session tokens are also known as CSRF tokens, anti-CSRF tokens, or state tokens. Parameterized database queries, whitelist input values, and strong ciphers do not prevent CSRF attacks, although they are useful against other types of web application vulnerabilities. Parameterized database queries prevent SQL injection by using placeholders for user input instead of concatenating the input directly into the SQL statement, so the input is treated as data and not as part of the SQL command. Whitelist input validation prevents input-based attacks by allowing only a predefined set of values or characters, ensuring that user input conforms to the expected format and type. Strong ciphers protect encrypted data against brute-force and cryptanalytic attacks, preserving its confidentiality and integrity, but they do nothing to stop a forged request sent from an authenticated browser.
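To make the synchronizer-token pattern concrete, the following minimal Python sketch shows a token being bound to a session and verified on submission. The session dictionary and function names are illustrative stand-ins, not a real framework API.

import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a random token, bind it to the user's session, and return it
    so it can be embedded in a hidden form field or custom header."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session: dict, submitted_token: str) -> bool:
    """Reject the request unless the submitted token matches the one stored
    server-side for this session (constant-time comparison)."""
    expected = session.get("csrf_token")
    return bool(expected) and hmac.compare_digest(expected, submitted_token or "")

# Example flow
session = {}
form_token = issue_csrf_token(session)           # rendered into the page
assert verify_csrf_token(session, form_token)    # legitimate submission passes
assert not verify_csrf_token(session, "forged")  # forged cross-site request fails

Because the malicious site cannot read the victim's page or session, it cannot supply the correct token, so the forged request is rejected.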
A vulnerability assessment report has been submitted to a client. The client indicates that one third of the hosts
that were in scope are missing from the report.
In which phase of the assessment was this error MOST likely made?
Enumeration
Reporting
Detection
Discovery
The discovery phase of a vulnerability assessment is the process of identifying and enumerating the hosts, services, and applications that are in scope of the assessment. This phase involves techniques such as network scanning, port scanning, service scanning, and banner grabbing. If one third of the hosts that were in scope are missing from the report, it means that the discovery phase was not performed properly or completely, and some hosts were not detected or included in the assessment. The enumeration, reporting, and detection phases are not likely to cause this error, as they depend on the results of the discovery phase. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Assessment and Testing, page 857; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 794.
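As a rough illustration of the discovery phase, the sketch below performs a simple TCP connect sweep over a handful of hosts and ports. The target addresses (TEST-NET examples) and port list are placeholders, not a recommended assessment scope.

import socket

TARGETS = ["192.0.2.10", "192.0.2.11"]   # illustrative TEST-NET addresses
PORTS = [22, 80, 443]

def discover(targets, ports, timeout=1.0):
    found = {}
    for host in targets:
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                # connect_ex returns 0 when the TCP handshake succeeds
                if s.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        if open_ports:
            found[host] = open_ports
    return found

print(discover(TARGETS, PORTS))

If the target list fed into a sweep like this omits a third of the in-scope hosts, every later phase (enumeration, detection, reporting) silently inherits that gap, which is why the error in the scenario traces back to discovery.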
Which of the following is the BEST way to reduce the impact of an externally sourced flood attack?
Have the service provider block the source address.
Have the source service provider block the address.
Block the source address at the firewall.
Block all inbound traffic until the flood ends.
The best way to reduce the impact of an externally sourced flood attack is to have the service provider block the source address. A flood attack is a type of denial-of-service attack that aims to overwhelm the target system or network with a large amount of traffic, such as SYN packets, ICMP packets, or UDP packets. An externally sourced flood attack is a flood attack that originates from outside the target’s network, such as from the internet. Having the service provider block the source address can help to reduce the impact of an externally sourced flood attack, as it can prevent the malicious traffic from reaching the target’s network, and thus conserve the network bandwidth and resources. Having the source service provider block the address, blocking the source address at the firewall, or blocking all inbound traffic until the flood ends are not the best ways to reduce the impact of an externally sourced flood attack, as they may not be feasible, effective, or efficient, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 745; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 525.
Which of the following is MOST effective in detecting information hiding in Transmission Control Protocol/internet Protocol (TCP/IP) traffic?
Stateful inspection firewall
Application-level firewall
Content-filtering proxy
Packet-filter firewall
An application-level firewall is the most effective in detecting information hiding in TCP/IP traffic. Information hiding is a technique that conceals data or messages within other data or messages, such as using steganography, covert channels, or encryption. An application-level firewall is a type of firewall that operates at the application layer of the OSI model, and inspects the content and context of the network packets, such as the headers, payloads, or protocols. An application-level firewall can help to detect information hiding in TCP/IP traffic, as it can analyze the data for any anomalies, inconsistencies, or violations of the expected format or behavior. A stateful inspection firewall, a content-filtering proxy, and a packet-filter firewall are not as effective in detecting information hiding in TCP/IP traffic, as they operate at lower layers of the OSI model, and only inspect the state, content, or header of the network packets, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 731; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 511.
What Is the FIRST step in establishing an information security program?
Establish an information security policy.
Identify factors affecting information security.
Establish baseline security controls.
Identify critical security infrastructure.
The first step in establishing an information security program is to establish an information security policy. An information security policy is a document that defines the objectives, scope, principles, and responsibilities of the information security program. An information security policy provides the foundation and direction for the information security program, as well as the basis for the development and implementation of the information security standards, procedures, and guidelines. An information security policy should be approved and supported by the senior management, and communicated and enforced across the organization. Identifying factors affecting information security, establishing baseline security controls, and identifying critical security infrastructure are not the first steps in establishing an information security program, but they may be part of the subsequent steps, such as the risk assessment, risk mitigation, or risk monitoring. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 22; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 14.
Match the name of access control model with its associated restriction.
Drag each access control model to its appropriate restriction access on the right.
Transport Layer Security (TLS) provides which of the following capabilities for a remote access server?
Transport layer handshake compression
Application layer negotiation
Peer identity authentication
Digital certificate revocation
Transport Layer Security (TLS) provides peer identity authentication as one of its capabilities for a remote access server. TLS is a cryptographic protocol that provides secure communication over a network. It operates at the transport layer of the OSI model, between the application layer and the network layer. TLS uses asymmetric encryption to establish a secure session key between the client and the server, and then uses symmetric encryption to encrypt the data exchanged during the session. TLS also uses digital certificates to verify the identity of the client and the server, and to prevent impersonation or spoofing attacks. This process is known as peer identity authentication, and it ensures that the client and the server are communicating with the intended parties and not with an attacker. TLS also provides other capabilities for a remote access server, such as data integrity, confidentiality, and forward secrecy. References: Enable TLS 1.2 on servers - Configuration Manager; How to Secure Remote Desktop Connection with TLS 1.2. - Microsoft Q&A; Enable remote access from intranet with TLS/SSL certificate (Advanced …
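A minimal Python sketch of peer identity authentication from the client side is shown below: the server's certificate is validated against trusted CAs and its hostname is checked before any application data is exchanged. The host www.example.com stands in for the remote access server in this example.

import socket
import ssl

hostname = "www.example.com"   # placeholder for the remote access server

context = ssl.create_default_context()      # loads the system CA store
context.check_hostname = True               # enforce hostname matching
context.verify_mode = ssl.CERT_REQUIRED     # reject unauthenticated peers

with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated:", tls.version())
        print("Peer certificate subject:", tls.getpeercert().get("subject"))

If the certificate is untrusted, expired, or issued for a different name, the handshake fails, which is exactly the peer identity authentication capability described above.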
What is the PRIMARY role of a scrum master in agile development?
To choose the primary development language
To choose the integrated development environment
To match the software requirements to the delivery plan
To project manage the software delivery
The primary role of a scrum master in agile development is to match the software requirements to the delivery plan. A scrum master is a facilitator who helps the development team and the product owner to collaborate and deliver the software product incrementally and iteratively, following the agile principles and practices. A scrum master is responsible for ensuring that the team follows the scrum framework, which includes defining the product backlog, planning the sprints, conducting the daily stand-ups, reviewing the deliverables, and reflecting on the process. A scrum master is not responsible for choosing the primary development language, the integrated development environment, or project managing the software delivery, although they may provide guidance and support to the team on these aspects. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 933; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 855.
What is the second step in the identity and access provisioning lifecycle?
Provisioning
Review
Approval
Revocation
The identity and access provisioning lifecycle is the process of managing the creation, modification, and termination of user accounts and access rights in an organization. The second step in this lifecycle is approval, which means that the identity and access requests must be authorized by the appropriate managers or administrators before they are implemented. Approval ensures that the principle of least privilege is followed and that only authorized users have access to the required resources.
What does a Synchronous (SYN) flood attack do?
Forces Transmission Control Protocol /Internet Protocol (TCP/IP) connections into a reset state
Establishes many new Transmission Control Protocol / Internet Protocol (TCP/IP) connections
Empties the queue of pending Transmission Control Protocol /Internet Protocol (TCP/IP) requests
Exceeds the limits for new Transmission Control Protocol /Internet Protocol (TCP/IP) connections
A SYN flood attack does exceed the limits for new TCP/IP connections. A SYN flood attack is a type of denial-of-service attack that sends a large number of SYN packets to a server, without completing the TCP three-way handshake. The server allocates resources for each SYN packet and waits for the final ACK packet, which never arrives. This consumes the server’s memory and processing power, and prevents it from accepting new legitimate connections. The other options are not accurate descriptions of what a SYN flood attack does. References: SYN flood - Wikipedia; SYN flood DDoS attack | Cloudflare.
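The toy model below (sizes and counts are arbitrary) illustrates why a SYN flood exceeds the limit for new connections: half-open entries fill a fixed backlog, so a legitimate client's SYN is refused.

from collections import deque

BACKLOG_LIMIT = 128
half_open = deque()          # connections that sent SYN but never completed the handshake

def receive_syn(src):
    if len(half_open) >= BACKLOG_LIMIT:
        return "dropped"     # no room: legitimate clients are refused
    half_open.append(src)    # server allocates state and waits for the final ACK
    return "queued"

# Attacker floods spoofed SYNs that never complete the handshake
for i in range(BACKLOG_LIMIT):
    receive_syn(f"spoofed-{i}")

print(receive_syn("legitimate-client"))   # -> "dropped"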
What are the steps of a risk assessment?
identification, analysis, evaluation
analysis, evaluation, mitigation
classification, identification, risk management
identification, evaluation, mitigation
The steps of a risk assessment are identification, analysis, and evaluation. Identification is the process of finding and listing the assets, threats, and vulnerabilities that are relevant to the risk assessment. Analysis is the process of estimating the likelihood and impact of each threat scenario and calculating the level of risk. Evaluation is the process of comparing the risk level with the risk criteria and determining whether the risk is acceptable or not. Mitigation is not part of the risk assessment, but it is part of the risk management, which is the process of applying controls to reduce or eliminate the risk. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 36; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 28.
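A toy qualitative example of the three steps (the scales, scores, and acceptance threshold are assumed for illustration, not taken from any standard) might look like this in Python:

# identification: list the risk scenarios
risks = {
    "ransomware on file server": {"likelihood": 4, "impact": 5},
    "laptop theft":              {"likelihood": 3, "impact": 3},
}
RISK_ACCEPTANCE_THRESHOLD = 10   # evaluation criterion agreed by management (assumed)

for name, r in risks.items():
    level = r["likelihood"] * r["impact"]                                   # analysis
    verdict = "treat" if level > RISK_ACCEPTANCE_THRESHOLD else "accept"    # evaluation
    print(f"{name}: level={level} -> {verdict}")

Anything marked "treat" would then move into risk management, where mitigation options are selected.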
Which of the following is the MOST effective method to mitigate Cross-Site Scripting (XSS) attacks?
Use Software as a Service (SaaS)
Whitelist input validation
Require client certificates
Validate data output
The most effective method to mitigate Cross-Site Scripting (XSS) attacks is to use whitelist input validation. XSS attacks occur when an attacker injects malicious code, usually in the form of a script, into a web application that is then executed by the browser of an unsuspecting user. XSS attacks can compromise the confidentiality, integrity, and availability of the web application and the user’s data. Whitelist input validation is a technique that checks the user input against a predefined set of acceptable values or characters, and rejects any input that does not match the whitelist. Whitelist input validation can prevent XSS attacks by filtering out any malicious or unexpected input that may contain harmful scripts. Whitelist input validation should be applied at the point of entry of the user input, and should be combined with output encoding or sanitization to ensure that any input that is displayed back to the user is safe and harmless. Use Software as a Service (SaaS), require client certificates, and validate data output are not the most effective methods to mitigate XSS attacks, although they may be related or useful techniques. Use Software as a Service (SaaS) is a model that delivers software applications over the Internet, usually on a subscription or pay-per-use basis. SaaS can provide some benefits for web security, such as reducing the attack surface, outsourcing the maintenance and patching of the software, and leveraging the expertise and resources of the service provider. However, SaaS does not directly address the issue of XSS attacks, as the service provider may still have vulnerabilities or flaws in their web applications that can be exploited by XSS attackers. Require client certificates is a technique that uses digital certificates to authenticate the identity of the clients who access a web application. Client certificates are issued by a trusted certificate authority (CA), and contain the public key and other information of the client. Client certificates can provide some benefits for web security, such as enhancing the confidentiality and integrity of the communication, preventing unauthorized access, and enabling mutual authentication. However, client certificates do not directly address the issue of XSS attacks, as the client may still be vulnerable to XSS attacks if the web application does not properly validate and encode the user input. Validate data output is a technique that checks the data that is sent from the web application to the client browser, and ensures that it is correct, consistent, and safe. Validate data output can provide some benefits for web security, such as detecting and correcting any errors or anomalies in the data, preventing data leakage or corruption, and enhancing the quality and reliability of the web application. However, validate data output is not sufficient to prevent XSS attacks, as the data output may still contain malicious scripts that can be executed by the client browser. Validate data output should be complemented with output encoding or sanitization to ensure that any data output that is displayed to the user is safe and harmless.
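The sketch below shows the two layers discussed above: whitelist validation at the point of entry and output encoding before the value is displayed. The allowed username format is an assumed example policy, not a standard.

import html
import re

USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def accept_username(value: str) -> str:
    """Reject anything outside the allowed characters and length (whitelist)."""
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("input rejected by whitelist validation")
    return value

def render_greeting(name: str) -> str:
    """Encode on output so any characters that slip through cannot execute."""
    return "<p>Hello, " + html.escape(name) + "</p>"

print(render_greeting(accept_username("alice_01")))    # safe
# accept_username("<script>alert(1)</script>")         # raises ValueError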
A security professional determines that a number of outsourcing contracts inherited from a previous merger do not adhere to the current security requirements. Which of the following BEST minimizes the risk of this
happening again?
Define additional security controls directly after the merger
Include a procurement officer in the merger team
Verify all contracts before a merger occurs
Assign a compliancy officer to review the merger conditions
The best way to minimize the risk of inheriting outsourcing contracts that do not adhere to the current security requirements is to verify all contracts before a merger occurs. This means that the security professionals involved in the merger should review and assess the security clauses, standards, and practices of the contracts of the merging parties, and identify any gaps, conflicts, or inconsistencies with the current security requirements. This can help to ensure that the outsourcing contracts are aligned with the security objectives, policies, and regulations of the merged organization, and that the security risks and responsibilities are clearly defined and agreed upon by all parties. Verifying all contracts before a merger occurs can also help to avoid any legal, financial, or operational issues that may arise from non-compliance or breach of contract. Defining additional security controls directly after the merger, including a procurement officer in the merger team, and assigning a compliance officer to review the merger conditions are all possible ways to minimize the risk of inheriting outsourcing contracts that do not adhere to the current security requirements, but they are not as effective as verifying all contracts before a merger occurs. Defining additional security controls directly after the merger may be too late, as the outsourcing contracts may already be in effect and may not be easily modified or terminated. Including a procurement officer in the merger team may be helpful, as the procurement officer can oversee the contracting process and ensure that the security requirements are met, but the procurement officer may not have the technical expertise or authority to verify the security clauses, standards, and practices of the contracts. Assigning a compliance officer to review the merger conditions may be useful, as the compliance officer can monitor and audit the compliance status and performance of the outsourcing contracts, but the compliance officer may not have the power or influence to change or enforce the security requirements of the contracts.
“Stateful” differs from “Static” packet filtering firewalls by being aware of which of the following?
Difference between a new and an established connection
Originating network location
Difference between a malicious and a benign packet payload
Originating application session
Stateful firewalls differ from static packet filtering firewalls by being aware of the difference between a new and an established connection. A stateful firewall is a firewall that keeps track of the state of network connections and transactions, and uses this information to make filtering decisions. A stateful firewall maintains a state table that records the source and destination IP addresses, port numbers, protocols, and sequence numbers of each connection. A stateful firewall can distinguish between a new connection, which requires a three-way handshake to be completed, and an established connection, which has already completed the handshake and is ready to exchange data. A stateful firewall can also detect when a connection is terminated or idle, and remove it from the state table. A stateful firewall can provide more security and efficiency than a static packet filtering firewall, which only examines the header of each packet and compares it to a set of predefined rules. A static packet filtering firewall does not keep track of the state of connections, and cannot differentiate between new and established connections. A static packet filtering firewall may allow or block packets based on the source and destination IP addresses, port numbers, and protocols, but it cannot inspect the payload or the sequence numbers of the packets. A static packet filtering firewall may also be vulnerable to spoofing or flooding attacks, as it cannot verify the authenticity or validity of the packets. The other options are not aspects that stateful firewalls are aware of, but static packet filtering firewalls are not. Both types of firewalls can check the originating network location of the packets, but they cannot check the difference between a malicious and a benign packet payload, or the originating application session of the packets. References: Stateless vs Stateful Packet Filtering Firewalls - GeeksforGeeks; Stateful vs Stateless Firewall: Differences and Examples - Fortinet; Stateful Inspection Firewalls Explained - Palo Alto Networks.
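A highly simplified model of the state table a stateful firewall keeps is sketched below; the field names are illustrative and direction handling is omitted for brevity. The point is that packets belonging to an established connection are matched against existing state, while packets with no state are dropped.

# key: (src_ip, src_port, dst_ip, dst_port, proto) -> connection state
state_table = {}

def inspect(packet):
    key = (packet["src"], packet["sport"], packet["dst"], packet["dport"], "tcp")
    if packet["flags"] == "SYN":
        state_table[key] = "SYN_SENT"        # new connection attempt (rule base would be consulted)
        return "allow (new)"
    if key in state_table:
        if packet["flags"] == "ACK":
            state_table[key] = "ESTABLISHED" # handshake completed
        return "allow (established)"
    return "drop (no state)"                 # stray or spoofed packet

print(inspect({"src": "10.0.0.5", "sport": 40000, "dst": "203.0.113.7", "dport": 443, "flags": "SYN"}))
print(inspect({"src": "10.0.0.5", "sport": 40000, "dst": "203.0.113.7", "dport": 443, "flags": "ACK"}))
print(inspect({"src": "198.51.100.9", "sport": 1234, "dst": "10.0.0.5", "dport": 22, "flags": "ACK"}))

A static packet filter has no such table, so it cannot make the "new versus established" distinction shown in the last call.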
Which of the following would an attacker BEST be able to accomplish through the use of Remote Access Tools (RAT)?
Reduce the probability of identification
Detect further compromise of the target
Destabilize the operation of the host
Maintain and expand control
Remote Access Tools (RAT) are malicious software that allow an attacker to remotely access and control a compromised host, often without the user’s knowledge or consent. RATs can be used to perform various malicious activities, such as stealing data, installing backdoors, executing commands, spying on the user, or spreading to other hosts. One of the main objectives of RATs is to maintain and expand control over the target network, by evading detection, hiding their presence, and creating persistence mechanisms.
Which of the following is the MOST important part of an awareness and training plan to prepare employees for emergency situations?
Having emergency contacts established for the general employee population to get information
Conducting business continuity and disaster recovery training for those who have a direct role in the recovery
Designing business continuity and disaster recovery training programs for different audiences
Publishing a corporate business continuity and disaster recovery plan on the corporate website
The most important part of an awareness and training plan to prepare employees for emergency situations is to design business continuity and disaster recovery training programs for different audiences. This means that the training content, format, frequency, and delivery methods should be tailored to the specific needs, roles, and responsibilities of the target audience, such as senior management, business unit managers, IT staff, recovery team members, or general employees. Different audiences may have different levels of awareness, knowledge, skills, and involvement in the business continuity and disaster recovery processes, and therefore require different types of training to ensure they are adequately prepared and informed. Designing business continuity and disaster recovery training programs for different audiences can help to increase the effectiveness, efficiency, and consistency of the training, as well as the engagement, motivation, and retention of the learners. Having emergency contacts established for the general employee population to get information, conducting business continuity and disaster recovery training for those who have a direct role in the recovery, and publishing a corporate business continuity and disaster recovery plan on the corporate website are all important parts of an awareness and training plan, but they are not as important as designing business continuity and disaster recovery training programs for different audiences. Having emergency contacts established for the general employee population to get information can help to provide timely and accurate communication and guidance during an emergency situation, but it does not necessarily prepare the employees for their roles and responsibilities before, during, and after the emergency. Conducting business continuity and disaster recovery training for those who have a direct role in the recovery can help to ensure that they are competent and confident to perform their tasks and duties in the event of a disruption, but it does not address the needs and expectations of other audiences who may also be affected by or involved in the business continuity and disaster recovery processes. Publishing a corporate business continuity and disaster recovery plan on the corporate website can help to make the plan accessible and transparent to the stakeholders, but it does not guarantee that the plan is understood, followed, or updated by the employees.
What can happen when an Intrusion Detection System (IDS) is installed inside a firewall-protected internal network?
The IDS can detect failed administrator logon attempts from servers.
The IDS can increase the number of packets to analyze.
The firewall can increase the number of packets to analyze.
The firewall can detect failed administrator login attempts from servers
An Intrusion Detection System (IDS) is a monitoring system that detects suspicious activities and generates alerts when they are detected. An IDS can be installed inside a firewall-protected internal network to monitor the traffic within the network and identify any potential threats or anomalies. One of the scenarios that an IDS can detect is failed administrator logon attempts from servers. This could indicate that an attacker has compromised a server and is trying to escalate privileges or access sensitive data. An IDS can alert the security team of such attempts and help them to investigate and respond to the incident. The other options are not valid consequences of installing an IDS inside a firewall-protected internal network. An IDS does not increase the number of packets to analyze, as it only passively observes the traffic that is already flowing in the network. An IDS does not affect the firewall’s functionality or performance, as it operates independently from the firewall. An IDS does not enable the firewall to detect failed administrator login attempts from servers, as the firewall is not designed to inspect the content or the behavior of the traffic, but only to filter it based on predefined rules. References: Intrusion Detection System (IDS) - GeeksforGeeks; Exploring Firewalls & Intrusion Detection Systems in Network Security ….
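As a rough illustration of the detection described above, the following sketch counts failed administrator logons per source host in a log stream and raises an alert past a threshold. The log format and threshold are assumed for the example.

import re
from collections import Counter

FAILED_LOGON = re.compile(r"FAILED LOGIN user=(?P<user>\S+) from=(?P<host>\S+)")
THRESHOLD = 2   # assumed alerting threshold

log_lines = [
    "FAILED LOGIN user=administrator from=srv-db-01",
    "FAILED LOGIN user=administrator from=srv-db-01",
]

failures = Counter()
for line in log_lines:
    m = FAILED_LOGON.search(line)
    if m and m.group("user") == "administrator":
        failures[m.group("host")] += 1

for host, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed administrator logons from {host}")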
An Information Technology (IT) professional attends a cybersecurity seminar on current incident response methodologies.
What code of ethics canon is being observed?
Provide diligent and competent service to principals
Protect society, the commonwealth, and the infrastructure
Advance and protect the profession
Act honorably, honestly, justly, responsibly, and legally
Attending a cybersecurity seminar to learn about current incident response methodologies aligns with the ethical canon of advancing and protecting the profession. It involves enhancing one’s knowledge and skills, contributing to the growth and integrity of the field, and staying abreast of the latest developments and best practices in information security. References: ISC² Code of Ethics
Which of the following is a characteristic of an internal audit?
An internal audit is typically shorter in duration than an external audit.
The internal audit schedule is published to the organization well in advance.
The internal auditor reports to the Information Technology (IT) department
Management is responsible for reading and acting upon the internal audit results
A characteristic of an internal audit is that management is responsible for reading and acting upon the internal audit results. An internal audit is an independent and objective evaluation of an organization's internal controls, processes, or activities, performed by auditors who are part of the organization, such as the internal audit department or staff working for the audit committee. An internal audit can enhance the accuracy and reliability of operations, help prevent or detect fraud and errors, and support compliance activities. It typically involves planning the audit, gathering and evaluating evidence, reporting the findings, and following up on corrective actions.
Management is responsible for reading and acting upon the internal audit results because managers are the primary recipients of the internal audit report and have the authority and accountability to implement its recommendations and, where required, to disclose the results to external parties such as regulators, shareholders, or customers. The other statements are not defining characteristics of an internal audit. An internal audit is not necessarily shorter than an external audit; its duration depends on the objectives, scope, criteria, and methodology of the engagement, even though internal auditors usually have greater familiarity with, and access to, the organization's controls and activities. Publishing the internal audit schedule to the organization well in advance is a good practice that supports transparency and coordination among stakeholders, but it is not what distinguishes an internal audit. Finally, the internal auditor should not report to the Information Technology (IT) department; to preserve independence and objectivity, the internal audit function typically reports to the audit committee or the board.
The MAIN use of Layer 2 Tunneling Protocol (L2TP) is to tunnel data
through a firewall at the Session layer
through a firewall at the Transport layer
in the Point-to-Point Protocol (PPP)
in the Payload Compression Protocol (PCP)
The main use of Layer 2 Tunneling Protocol (L2TP) is to tunnel data in the Point-to-Point Protocol (PPP). L2TP is a tunneling protocol that operates at the data link layer (Layer 2) of the OSI model, and is used to support virtual private networks (VPNs) or as part of the delivery of services by ISPs. L2TP does not provide encryption or authentication by itself, but it can be combined with IPsec to provide security and confidentiality for the tunneled data. L2TP is commonly used to tunnel PPP sessions over an IP network, such as the Internet. PPP is a protocol that establishes a direct connection between two nodes, and provides authentication, encryption, and compression for the data transmitted over the connection. PPP is often used to connect a remote client to a corporate network, or a user to an ISP. By using L2TP to encapsulate PPP packets, the connection can be extended over a public or shared network, creating a VPN. This way, the user can access the network resources and services securely and transparently, as if they were directly connected to the network. The other options are not the main use of L2TP, as they involve different protocols or layers. L2TP does not tunnel data through a firewall, but rather over an IP network. L2TP does not operate at the session layer or the transport layer, but at the data link layer. L2TP does not use the Payload Compression Protocol (PCP), but rather the Point-to-Point Protocol (PPP). References: Layer 2 Tunneling Protocol - Wikipedia; What is the Layer 2 Tunneling Protocol (L2TP)? - NordVPN; Understanding VPN protocols: OpenVPN, L2TP, WireGuard & more.
Digital certificates used in Transport Layer Security (TLS) support which of the following?
Information input validation
Non-repudiation controls and data encryption
Multi-Factor Authentication (MFA)
Server identity and data confidentiality
Digital certificates are electronic documents that contain the public key of an entity and are signed by a trusted third party, called a Certificate Authority (CA). In Transport Layer Security (TLS), a protocol that provides secure communication over the Internet, the server's digital certificate allows the client to authenticate the server's identity, and the key exchange it supports is used to establish the session keys that keep the transmitted data confidential. These two functions, server identity and data confidentiality, are what digital certificates support in TLS.
Which of the following is the MOST common method of memory protection?
Compartmentalization
Segmentation
Error correction
Virtual Local Area Network (VLAN) tagging
The most common method of memory protection is segmentation. Segmentation is a technique that divides the memory space into logical segments, such as code, data, stack, and heap. Each segment has its own attributes, such as size, location, access rights, and protection level. Segmentation can help to isolate and protect the memory segments from unauthorized or unintended access, modification, or execution, as well as to prevent memory corruption, overflow, or leakage. Compartmentalization, error correction, and VLAN tagging are not methods of memory protection, but of information protection, data protection, and network protection, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 589; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 370.
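The following conceptual sketch (values and access rights are illustrative only; real segmentation is enforced by the CPU and memory management hardware, not application code) shows the base, limit, and rights checks that segmentation applies to every access.

segments = {
    "code":  {"base": 0x1000, "limit": 0x2000, "rights": {"read", "execute"}},
    "stack": {"base": 0x8000, "limit": 0x1000, "rights": {"read", "write"}},
}

def access(segment_name, offset, operation):
    seg = segments[segment_name]
    if offset >= seg["limit"]:
        raise MemoryError("segmentation fault: offset outside segment limit")
    if operation not in seg["rights"]:
        raise PermissionError(f"segmentation fault: {operation} not allowed on {segment_name}")
    return seg["base"] + offset   # translated address within the segment

print(hex(access("code", 0x10, "execute")))   # allowed
# access("code", 0x10, "write")               # raises PermissionError
# access("stack", 0x2000, "read")             # raises MemoryError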
Which of the following is the BEST Identity-as-a-Service (IDaaS) solution for validating users?
Single Sign-On (SSO)
Security Assertion Markup Language (SAML)
Lightweight Directory Access Protocol (LDAP)
Open Authentication (OAuth)
The best Identity-as-a-Service (IDaaS) solution for validating users is Security Assertion Markup Language (SAML). IDaaS is a cloud-based service that provides identity and access management functions, such as authentication, authorization, and provisioning, to the customers. SAML is a standard protocol that enables the exchange of authentication and authorization information between different parties, such as the identity provider, the service provider, and the user. SAML can help to validate users in an IDaaS solution, as it can allow the users to access multiple cloud services with a single sign-on, and provide the service providers with the necessary identity and attribute assertions about the users. Single Sign-On (SSO), Lightweight Directory Access Protocol (LDAP), and Open Authentication (OAuth) are not IDaaS solutions, but technologies or protocols that can be used or supported by IDaaS solutions, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 654; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 437.
Why is planning in Disaster Recovery (DR) an interactive process?
It details off-site storage plans
It identifies omissions in the plan
It defines the objectives of the plan
It forms part of the awareness process
Planning in Disaster Recovery (DR) is an interactive process because it identifies omissions in the plan. DR planning is the process of developing and implementing procedures and processes to ensure that an organization can quickly resume its critical functions after a disaster or a disruption. DR planning involves various steps, such as conducting a risk assessment, performing a business impact analysis, defining the recovery objectives and strategies, designing and developing the DR plan, testing and validating the DR plan, and maintaining and updating the DR plan. DR planning is an interactive process because it requires constant feedback and communication among the stakeholders, such as the management, the employees, the customers, the suppliers, and the regulators. DR planning also requires regular reviews and evaluations of the plan to identify and address any gaps, errors, or changes that may affect the effectiveness or the feasibility of the plan. DR planning is not an interactive process because it details off-site storage plans, defines the objectives of the plan, or forms part of the awareness process, although these may be related or important aspects of DR planning. Detailing off-site storage plans is a technique that involves storing copies of the essential data, documents, or equipment at a secure and remote location, such as a vault, a warehouse, or a cloud service. Detailing off-site storage plans can provide some benefits for DR planning, such as enhancing the availability and the integrity of the data, documents, or equipment, preventing data loss or corruption, and facilitating the recovery and the restoration process. However, detailing off-site storage plans is not the reason why DR planning is an interactive process, as it is not a feedback or a communication mechanism, and it does not identify or address any omissions in the plan. Defining the objectives of the plan is a step that involves establishing the goals and the priorities of the DR plan, such as the recovery time objective (RTO), the recovery point objective (RPO), the maximum tolerable downtime (MTD), or the minimum operating level (MOL). Defining the objectives of the plan can provide some benefits for DR planning, such as aligning the DR plan with the business needs and expectations, setting the scope and the boundaries of the DR plan, and measuring the performance and the outcomes of the DR plan. However, defining the objectives of the plan is not the reason why DR planning is an interactive process, as it is not a feedback or a communication mechanism, and it does not identify or address any omissions in the plan. Forming part of the awareness process is a technique that involves educating and informing the stakeholders about the DR plan, such as the purpose, the scope, the roles, the responsibilities, or the procedures of the DR plan. Forming part of the awareness process can provide some benefits for DR planning, such as improving the knowledge and the skills of the stakeholders, changing the attitudes and the behaviors of the stakeholders, and empowering the stakeholders to make informed and secure decisions regarding the DR plan. However, forming part of the awareness process is not the reason why DR planning is an interactive process, as it is not a feedback or a communication mechanism, and it does not identify or address any omissions in the plan.
Who is responsible for the protection of information when it is shared with or provided to other organizations?
Systems owner
Authorizing Official (AO)
Information owner
Security officer
The information owner is the person who has the authority and responsibility for the information within an Information System (IS). The information owner is responsible for the protection of information when it is shared with or provided to other organizations, such as by defining the classification, sensitivity, retention, and disposal of the information, as well as by approving or denying the access requests and periodically reviewing the access rights. The system owner, the authorizing official, and the security officer are not responsible for the protection of information when it is shared with or provided to other organizations, although they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
What is the PRIMARY goal of fault tolerance?
Elimination of single point of failure
Isolation using a sandbox
Single point of repair
Containment to prevent propagation
The primary goal of fault tolerance is to eliminate single points of failure. A single point of failure is any component or resource that is essential for the operation or the functionality of a system or a network, and whose failure or malfunction can cause the entire system or network to fail or malfunction. Fault tolerance is the ability of a system or a network to suffer a fault but continue to operate, by adding redundant or backup components or resources that can take over from or replace the failed component without affecting the performance or the quality of the system or network. Fault tolerance can provide some benefits for security, such as enhancing the availability and the reliability of the system or network, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. Fault tolerance can be implemented using various methods or techniques, such as redundant disks (RAID), clustered or standby servers, duplicate power supplies, and alternate network paths.
Isolation using a sandbox, single point of repair, and containment to prevent propagation are not the primary goals of fault tolerance, although they may be related or possible outcomes or benefits of fault tolerance. Isolation using a sandbox is a security concept or technique that involves executing or testing a program or a code in a separate or a restricted environment, such as a virtual machine or a container, to protect the system or the network from any potential harm or damage that the program or the code may cause, such as malware, viruses, worms, or trojans. Isolation using a sandbox can provide some benefits for security, such as enhancing the confidentiality and the integrity of the system or the network, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. However, isolation using a sandbox is not the primary goal of fault tolerance, as it is not a method or a technique of adding redundant or backup components or resources to the system or the network, and it does not address the availability or the reliability of the system or the network. Single point of repair is a security concept or technique that involves identifying or locating the component or the resource that is responsible for the failure or the malfunction of the system or the network, and that can restore or recover the system or the network if it is repaired or replaced, such as a disk, a server, or a router. Single point of repair can provide some benefits for security, such as enhancing the availability and the reliability of the system or the network, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. However, single point of repair is not the primary goal of fault tolerance, as it is not a method or a technique of adding redundant or backup components or resources to the system or the network, and it does not prevent or eliminate the failure or the malfunction of the system or the network. Containment to prevent propagation is a security concept or technique that involves isolating or restricting the component or the resource that is affected or infected by a fault or an attack, such as a malware, a virus, a worm, or a trojan, to prevent or mitigate the spread or the transmission of the fault or the attack to other components or resources of the system or the network, such as by disconnecting, disabling, or quarantining the component or the resource. Containment to prevent propagation can provide some benefits for security, such as enhancing the confidentiality and the integrity of the system or the network, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. However, containment to prevent propagation is not the primary goal of fault tolerance, as it is not a method or a technique of adding redundant or backup components or resources to the system or the network, and it does not ensure or improve the performance or the quality of the system or the network.
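A minimal sketch of the failover idea appears below: a redundant standby absorbs the failure of the primary so the caller never sees it. The endpoint names are made up and the failure is simulated.

import random

REPLICAS = ["https://app-primary.example.internal", "https://app-standby.example.internal"]

def call_endpoint(url):
    # Stand-in for a real network call; randomly simulates a failed primary.
    if "primary" in url and random.random() < 0.5:
        raise ConnectionError(f"{url} unavailable")
    return f"200 OK from {url}"

def fault_tolerant_call():
    last_error = None
    for url in REPLICAS:              # try each redundant component in turn
        try:
            return call_endpoint(url)
        except ConnectionError as exc:
            last_error = exc          # the fault is absorbed, not exposed to the user
    raise RuntimeError("all replicas failed") from last_error

print(fault_tolerant_call())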
Attack trees are MOST useful for which of the following?
Determining system security scopes
Generating attack libraries
Enumerating threats
Evaluating Denial of Service (DoS) attacks
Attack trees are most useful for enumerating threats. Attack trees are graphical models that represent the possible ways that an attacker can exploit a system or achieve a goal. Attack trees consist of nodes that represent the attacker’s actions or conditions, and branches that represent the logical relationships between the nodes. Attack trees can help to enumerate the threats that the system faces, as well as to analyze the likelihood, impact, and countermeasures of each threat. Attack trees are not useful for determining system security scopes, generating attack libraries, or evaluating DoS attacks, although they may be used as inputs or outputs for these tasks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Operations, page 499; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 552.
A Security Operations Center (SOC) receives an incident response notification on a server with an active
intruder who has planted a backdoor. Initial notifications are sent and communications are established.
What MUST be considered or evaluated before performing the next step?
Notifying law enforcement is crucial before hashing the contents of the server hard drive
Identifying who executed the incident is more important than how the incident happened
Removing the server from the network may prevent catching the intruder
Copying the contents of the hard drive to another storage device may damage the evidence
Before performing the next step in an incident response, it must be considered or evaluated that removing the server from the network may prevent catching the intruder who has planted a backdoor. This is because the intruder may still be connected to the server or may try to reconnect later, and disconnecting the server may alert the intruder or lose the opportunity to trace the intruder’s source or identity. Therefore, it may be better to isolate the server from the network or monitor the network traffic to gather more evidence and information about the intruder. Notifying law enforcement, identifying who executed the incident, and copying the contents of the hard drive are not the factors that must be considered or evaluated before performing the next step, as they are either irrelevant or premature at this stage of the incident response. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 973; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 8: Security Operations, page 1019.
In a change-controlled environment, which of the following is MOST likely to lead to unauthorized changes to
production programs?
Modifying source code without approval
Promoting programs to production without approval
Developers checking out source code without approval
Developers using Rapid Application Development (RAD) methodologies without approval
In a change-controlled environment, the activity that is most likely to lead to unauthorized changes to production programs is promoting programs to production without approval. A change-controlled environment is an environment that follows a specific process for managing and tracking changes to the hardware and software components of a system or a network, including their configuration, functionality, and security. A change-controlled environment can provide some benefits for security, such as enhancing the performance and the functionality of the system or the network, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. A typical change control process involves submitting a change request, assessing its impact, obtaining approval from the change control board, testing the change, implementing it, and documenting the result.
Promoting programs to production without approval is the activity most likely to lead to unauthorized changes to production programs because it violates the change control process and introduces risk directly into the operational environment. It means that code is moved from the development or testing environment into production without the authorization of the responsible parties, such as the change manager, the change review board, or the change advisory board. As a result, untested, undocumented, or even malicious code can reach production, introducing defects, configuration drift, security vulnerabilities, or compliance violations that the change control process is designed to prevent. Modifying or checking out source code without approval, or using Rapid Application Development (RAD) methodologies without approval, occurs in the development environment and does not by itself alter production programs.
Who is essential for developing effective test scenarios for disaster recovery (DR) test plans?
Business line management and IT staff members
Chief Information Officer (CIO) and DR manager
DR manager and IT staff members
IT staff members and project managers
Business line management and IT staff members are essential for developing effective test scenarios for DR test plans. Business line management can provide the business requirements, priorities, and impact analysis for the critical processes and functions that need to be recovered in the event of a disaster. IT staff members can provide the technical expertise, resources, and support for the recovery of the IT infrastructure and systems. Together, they can design realistic and comprehensive test scenarios that can validate the effectiveness and readiness of the DR plan. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 411. CISSP Practice Exam – FREE 20 Questions and Answers, Question 15.
Which of the following is the PRIMARY goal of logical access controls?
Restrict access to an information asset.
Ensure integrity of an information asset.
Restrict physical access to an information asset.
Ensure availability of an information asset.
Logical access controls are the policies, procedures, and mechanisms that regulate who can access an information asset and what actions they can perform on it. The primary goal of logical access controls is to restrict access to an information asset based on the principle of least privilege, which means that users should only have the minimum level of access required to perform their tasks. Ensuring integrity, restricting physical access, and ensuring availability are also important goals of information security, but they are not the primary goal of logical access controls. References: [CISSP CBK Reference, 5th Edition, Chapter 5, page 263]; [CISSP All-in-One Exam Guide, 8th Edition, Chapter 5, page 235]
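A minimal illustration of a logical access control decision based on least privilege is sketched below; the roles, permissions, and asset names are assumed examples.

acl = {
    "payroll-db": {"hr_analyst": {"read"}, "payroll_admin": {"read", "write"}},
}

def is_authorized(asset: str, role: str, action: str) -> bool:
    # Default-deny: access is granted only if explicitly listed for the role.
    return action in acl.get(asset, {}).get(role, set())

print(is_authorized("payroll-db", "hr_analyst", "read"))    # True
print(is_authorized("payroll-db", "hr_analyst", "write"))   # False: not granted
print(is_authorized("payroll-db", "intern", "read"))        # False: no access at all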
Which of the following objects should be removed FIRST prior to uploading code to public code repositories?
Security credentials
Known vulnerabilities
Inefficient algorithms
Coding mistakes
Security credentials are the objects that should be removed first prior to uploading code to public code repositories. Security credentials are any data or information that can be used to authenticate or authorize a user or a system, such as passwords, keys, tokens, certificates, or hashes. Security credentials should never be exposed or stored in plain text in the source code, as they can be easily compromised by attackers who can access the public code repositories. Security credentials should be removed or replaced with dummy values before uploading code to public code repositories, and stored securely in a separate location, such as a vault or a configuration file. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 419; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8: Software Development Security, page 559]
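A rough pre-publication scan for credential-looking strings, of the kind that might run before code is pushed to a public repository, is sketched below. The patterns are illustrative and far from exhaustive; dedicated secret-scanning tools are preferable in practice.

import re
import sys

SECRET_PATTERNS = [
    re.compile(r"(password|passwd|pwd)\s*=\s*\S+", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*[=:]\s*\S+", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_file(path: str) -> list[str]:
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            for pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: possible credential")
    return findings

if __name__ == "__main__":
    hits = [hit for path in sys.argv[1:] for hit in scan_file(path)]
    print("\n".join(hits) or "no credentials detected")
    sys.exit(1 if hits else 0)

Any hit should be removed and the real value moved to a secrets manager or environment configuration before the code is uploaded.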
A company-wide penetration test result shows customers could access and read files through a web browser. Which of the following can be used to mitigate this vulnerability?
Enforce the chmod of files to 755.
Enforce the control of file directory listings.
Implement access control on the web server.
Implement Secure Sockets Layer (SSL) certificates throughout the web server.
A company-wide penetration test result shows customers could access and read files through a web browser. This indicates that the web server is not configured properly to prevent unauthorized access to the file system. One of the ways to mitigate this vulnerability is to enforce the control of file directory listings, which means disabling the option that allows the web server to display the contents of a directory when there is no index file (such as index.html or index.php) present. This way, customers cannot browse or download the files that are not intended to be served by the web server. Enforcing the chmod of files to 755 is not a sufficient solution, as it only changes the permissions of the files to allow the owner to read, write, and execute, the group to read and execute, and others to read and execute. This does not prevent the web server from listing the files or serving them to the customers. Implementing access control on the web server is a good practice, but it may not be enough to prevent customers from accessing and reading files through a web browser, as some files may be accessible to anyone by default. Implementing Secure Sockets Layer (SSL) certificates throughout the web server is also a good practice, but it does not address the issue of unauthorized access to the file system, as it only encrypts the communication between the web server and the web browser. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 163. CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Network and Internet Security, page 287.
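As a simple illustration of controlling directory listings, the sketch below disables the automatic index that Python's built-in web server would otherwise generate for a directory with no index file. A production server (Apache, nginx, IIS) would use its own configuration directive to achieve the same effect.

from http.server import HTTPServer, SimpleHTTPRequestHandler

class NoListingHandler(SimpleHTTPRequestHandler):
    def list_directory(self, path):
        # Instead of generating an index of the directory's files, refuse the request.
        self.send_error(403, "Directory listing is disabled")
        return None

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), NoListingHandler).serve_forever()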
A software development company has a short timeline in which to deliver a software product. The software development team decides to use open-source software libraries to reduce the development time. What concept should software developers consider when using open-source software libraries?
Open source libraries contain known vulnerabilities, and adversaries regularly exploit those vulnerabilities in the wild.
Open source libraries can be used by everyone, and there is a common understanding that the vulnerabilities in these libraries will not be exploited.
Open source libraries are constantly updated, making it unlikely that a vulnerability exists for an adversary to exploit.
Open source libraries contain unknown vulnerabilities, so they should not be used.
The concept that software developers should consider when using open-source software libraries is that open source libraries contain known vulnerabilities, and adversaries regularly exploit those vulnerabilities in the wild. Open source software libraries are collections of reusable code or functions that are publicly available and free to use or modify by anyone. Open source software libraries can reduce the development time of a software product, as they provide ready-made solutions or features that the software developers can integrate into their own code. However, open source software libraries also pose security risks, as they may contain known vulnerabilities that are publicly disclosed and documented, and that adversaries can easily exploit to compromise the software product or the system. Software developers should consider this concept when using open source software libraries, and they should perform due diligence, such as checking the reputation, quality, and security of the open source software libraries, updating them regularly, and testing them for vulnerabilities or compatibility issues. Open source software libraries cannot be used by everyone, and there is no common understanding that the vulnerabilities in these libraries will not be exploited, as this is a false and naive assumption that ignores the reality and the nature of the cyberthreat landscape. Open source software libraries are not constantly updated, making it unlikely that a vulnerability exists for an adversary to exploit, as this is also a false and optimistic assumption that overlooks the fact that some open source software libraries may be outdated, abandoned, or poorly maintained by their developers or communities. Open source software libraries do not contain unknown vulnerabilities, so they should not be used, as this is a false and pessimistic assumption that disregards the benefits and the potential of the open source software libraries, and that implies that proprietary or closed source software libraries are free of vulnerabilities, which is not true. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 21: Software Development Security, page 2016.
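A toy check of pinned open-source dependencies against a known-vulnerability list is sketched below to show why third-party libraries must be inventoried and kept current. The package names and advisory data are fabricated for the example; real projects would use a feed such as the NVD or a software composition analysis tool.

installed = {"examplelib": "1.2.0", "otherlib": "4.1.3"}

advisories = {
    # package: (highest known-vulnerable version, advisory id) -- fabricated example data
    "examplelib": ("1.2.5", "EXAMPLE-2024-0001"),
}

def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

for package, version in installed.items():
    if package in advisories:
        vulnerable_up_to, advisory_id = advisories[package]
        if parse(version) <= parse(vulnerable_up_to):
            print(f"{package} {version} is affected by {advisory_id}; upgrade required")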
Which of the following is the MOST comprehensive Business Continuity (BC) test?
Full functional drill
Full table top
Full simulation
Full interruption
The most comprehensive Business Continuity (BC) test is the full interruption test. A full interruption test is a type of BC test that involves shutting down the primary site or system and activating the alternate site or system, as if a real disaster has occurred. A full interruption test is the most realistic and effective way to evaluate the BC plan, as it tests the actual recovery capabilities and performance of the organization under a simulated disaster scenario. A full interruption test can measure the recovery time objective (RTO), recovery point objective (RPO), and other metrics of the BC plan, as well as identify any gaps, issues, or weaknesses in the plan. A full interruption test can also test the communication, coordination, and documentation of the BC team, as well as the awareness, readiness, and resilience of the organization. However, a full interruption test is also the most risky and costly type of BC test, as it can disrupt the normal operations and services of the organization, cause potential loss of revenue or reputation, and expose the organization to legal or regulatory liabilities. Therefore, a full interruption test should be carefully planned, approved, and executed, with proper backup, contingency, and rollback measures in place. A full functional drill, a full table top, and a full simulation are not the most comprehensive BC tests. A full functional drill is a type of BC test that involves performing the actual recovery procedures and tasks at the alternate site or system, without shutting down the primary site or system. A full functional drill is less realistic and effective than a full interruption test, as it does not test the actual switch-over and switch-back processes of the BC plan. A full table top is a type of BC test that involves discussing and reviewing the BC plan and procedures with the BC team and stakeholders, using a simulated disaster scenario. A full table top is less realistic and effective than a full interruption test, as it does not test the actual execution and performance of the BC plan. A full simulation is a type of BC test that involves simulating the recovery environment and activities at the alternate site or system, using a computer model or a virtual machine, without shutting down the primary site or system. A full simulation is less realistic and effective than a full interruption test, as it does not test the actual recovery capabilities and performance of the organization under a simulated disaster scenario. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7, Security Operations, page 736. CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 697.
Which of the following management processes allots ONLY those services required for users to accomplish their tasks, change default user passwords, and set servers to retrieve antivirus updates?
Compliance
Configuration
Identity
Patch
Configuration is the process of setting up and maintaining the parameters, settings, and options of a system or network to ensure its optimal performance and security. Configuration management is the process of controlling and documenting the changes made to the configuration of a system or network. Configuration management can help to allot only those services required for users to accomplish their tasks, change default user passwords, and set servers to retrieve antivirus updates. These actions can help to enhance the security, functionality, and efficiency of the system or network, as well as to prevent unauthorized or unnecessary access or use of the system or network resources. Compliance, identity, or patch are not the best management processes to allot only those services required for users to accomplish their tasks, change default user passwords, and set servers to retrieve antivirus updates, as they are more related to the audit, access control, or vulnerability management aspects of security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 11: Security Operations, page 665; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 7: Security Operations, Question 7.9, page 273.
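A minimal sketch of such a configuration check, assuming a hypothetical baseline of approved services and a hypothetical account inventory, might look like the following; real configuration management tools apply the same idea of comparing the actual state against an approved baseline.

```python
# Hypothetical baseline: only the services users actually need are permitted.
ALLOWED_SERVICES = {"sshd", "httpd", "antivirus-updater"}
DEFAULT_PASSWORDS = {"admin", "password", "changeme"}

def check_services(running_services):
    """Return any running services that fall outside the approved baseline."""
    return sorted(set(running_services) - ALLOWED_SERVICES)

def check_default_passwords(accounts):
    """Return accounts still using a vendor default password (example data only)."""
    return [user for user, pw in accounts.items() if pw in DEFAULT_PASSWORDS]

if __name__ == "__main__":
    running = ["sshd", "httpd", "telnetd", "ftpd"]      # example service inventory
    accounts = {"admin": "admin", "svc_backup": "Xk9#qL2m"}  # example data
    print("Unapproved services:", check_services(running))
    print("Default passwords in use:", check_default_passwords(accounts))
```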
Which of the following is used to ensure that data mining activities will NOT reveal sensitive data?
Implement two-factor authentication on the underlying infrastructure.
Encrypt data at the field level and tightly control encryption keys.
Preprocess the databases to see if information can be disclosed from the learned patterns.
Implement the principle of least privilege on data elements so a reduced number of users can access the database.
Encrypting data at the field level ensures that sensitive information remains confidential and secure even during data mining activities. Tight control over encryption keys further ensures that only authorized personnel can access and decrypt sensitive data.
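A minimal sketch of field-level encryption, assuming the third-party cryptography package is available, could look like this: only the sensitive field is encrypted before the record is handed to the data mining process, and only holders of the key can recover the plaintext.

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

# In practice the key would live in a key management system with tight
# access control; generating it inline is only for illustration.
key = Fernet.generate_key()
cipher = Fernet(key)

record = {"customer_id": 1001, "name": "A. Example", "ssn": "123-45-6789"}

# Encrypt only the sensitive field before the record enters the mining dataset.
record["ssn"] = cipher.encrypt(record["ssn"].encode()).decode()
print("Stored record:", record)

# Authorized personnel holding the key can still decrypt when needed.
plaintext_ssn = cipher.decrypt(record["ssn"].encode()).decode()
print("Decrypted field:", plaintext_ssn)
```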
What is the BEST method if an investigator wishes to analyze a hard drive which may be used as evidence?
Leave the hard drive in place and use only verified and authenticated Operating Systems (OS) utilities ...
Log into the system and immediately make a copy of all relevant files to a Write Once, Read Many ...
Remove the hard drive from the system and make a copy of the hard drive's contents using imaging hardware.
Use a separate bootable device to make a copy of the hard drive before booting the system and analyzing the hard drive.
The best method if an investigator wishes to analyze a hard drive which may be used as evidence is to remove the hard drive from the system and make a copy of the hard drive’s contents using imaging hardware. Imaging hardware is a device that can create a bit-by-bit copy of the hard drive’s contents, including the deleted or hidden files, without altering or damaging the original hard drive. Imaging hardware can also verify the integrity of the copy by generating and comparing the hash values of the original and the copy. The copy can then be used for analysis, while the original can be preserved and stored in a secure location. This method can help to ensure the authenticity, reliability, and admissibility of the hard drive as evidence, as well as to prevent any potential tampering or contamination of the hard drive. Leaving the hard drive in place and using only verified and authenticated Operating Systems (OS) utilities, logging into the system and immediately making a copy of all relevant files to a Write Once, Read Many, or using a separate bootable device to make a copy of the hard drive before booting the system and analyzing the hard drive are not the best methods if an investigator wishes to analyze a hard drive which may be used as evidence, as they are either risky, incomplete, or inefficient methods that may compromise the integrity, validity, or quality of the hard drive as evidence. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 19: Digital Forensics, page 1049; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.15, page 306.
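Hash verification of the forensic copy can be illustrated with a short Python sketch; the image file names are hypothetical, and in practice the imaging hardware or forensic software usually computes and records these digests automatically.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of an image file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names for the original image and the working copy:
# original = sha256_of("evidence_original.dd")
# working_copy = sha256_of("evidence_copy.dd")
# print("Copy verified" if original == working_copy
#       else "HASH MISMATCH - do not use the copy as evidence")
```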
Which of the following is the PRIMARY issue when analyzing detailed log information?
Logs may be unavailable when required
Timely review of the data is potentially difficult
Most systems and applications do not support logging
Logs do not provide sufficient details of system and individual activities
The primary issue when analyzing detailed log information is that timely review of the data is potentially difficult. Log information is the record of events or activities that occur on a system or a network. Log information can provide valuable insights into the performance, security, and usage of the system or network. However, log information can also be voluminous, complex, and heterogeneous, making it challenging to review and analyze in a timely manner. Without proper tools and techniques, log analysis can be time-consuming, resource-intensive, and error-prone. The other options are not the primary issue when analyzing detailed log information. Logs may be unavailable when required, but this is more of a log management issue than a log analysis issue. Most systems and applications do support logging, but the level and quality of logging may vary. Logs do provide sufficient details of system and individual activities, but the details may not be easy to interpret or correlate. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 881-882; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 467-468.
Which of the following minimizes damage to information technology (IT) equipment stored in a data center when a false fire alarm event occurs?
A pre-action system is installed.
An open system is installed.
A dry system is installed.
A wet system is installed.
A pre-action system is a type of fire suppression system that minimizes damage to IT equipment stored in a data center when a false fire alarm event occurs. A pre-action system consists of sprinkler heads, pipes filled with pressurized air or nitrogen, and a separate water supply. The system is activated only when two conditions are met: a fire detection device (such as a smoke detector or a heat sensor) triggers an alarm, and the sprinkler heads detect a rise in temperature. When both conditions are met, the system releases the air or nitrogen from the pipes, and then fills them with water. The water is then discharged through the sprinkler heads that have been activated by the heat. A pre-action system prevents accidental water damage to IT equipment caused by false alarms, pipe leaks, or mechanical failures, as the pipes are normally dry and the water is released only when a fire is confirmed. A pre-action system also allows time for manual intervention or verification before the water is released, which can further reduce the risk of unnecessary water damage. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 9: Communication and Network Security, page 600. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, page 713.
Which of the following examples is BEST to minimize the attack surface for a customer's private information?
Obfuscation
Collection limitation
Authentication
Data masking
The best example to minimize the attack surface for a customer’s private information is collection limitation. Collection limitation is a principle of data protection that states that the collection of personal data should be limited to the minimum necessary for the specified purpose, and that the data should be obtained by lawful and fair means, with the consent of the data subject. Collection limitation reduces the attack surface for a customer’s private information, as it reduces the amount and scope of the data that is exposed to potential threats, and ensures that the data is collected in a legitimate and transparent manner. Obfuscation, authentication, and data masking are not examples of minimizing the attack surface, but rather examples of protecting the data that is already collected. Obfuscation is a technique of obscuring or hiding the meaning or intent of the data, such as by using encryption, hashing, or encoding. Authentication is a process of verifying the identity or credentials of a user or a system that requests access to the data. Data masking is a technique of replacing or modifying the sensitive data with fictitious or anonymized data, such as by using pseudonymization, tokenization, or generalization. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 2: Asset Security, page 115.
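For contrast, the protective techniques mentioned above can be illustrated with a small sketch of masking and pseudonymization (the values are examples only); note that these protect data that has already been collected, whereas collection limitation avoids collecting it in the first place.

```python
import hashlib

def mask_account_number(value, visible=4, mask_char="*"):
    """Replace all but the last few characters of a sensitive value (data masking)."""
    if len(value) <= visible:
        return mask_char * len(value)
    return mask_char * (len(value) - visible) + value[-visible:]

def pseudonymize(value, salt):
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

print(mask_account_number("4111111111111111"))          # ************1111
print(pseudonymize("jane.doe@example.com", "s3cr3t"))   # stable token, not the email
```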
Two remote offices need to be connected securely over an untrustworthy MAN. Each office needs to access network shares at the other site. Which of the following will BEST provide this functionality?
Client-to-site VPN
Third-party VPN service
Site-to-site VPN
Split-tunnel VPN
A site-to-site VPN is a type of VPN that connects two or more remote networks securely over an untrustworthy network, such as the Internet. A site-to-site VPN uses a VPN gateway at each site to establish an encrypted tunnel between the networks, allowing the devices at each site to access the network resources at the other site. A site-to-site VPN is the best option to provide the functionality of connecting two remote offices securely over an untrustworthy MAN and allowing each office to access network shares at the other site. The other options are not types of VPN that can provide the same functionality, as they either do not connect networks, do not use VPN gateways, or do not separate the Internet and corporate traffic. References: CISSP - Certified Information Systems Security Professional, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.1 Establish secure communication channels, 4.2.1.2 Transmission methods; CISSP Exam Outline, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.1 Establish secure communication channels, 4.2.1.2 Transmission methods
What is maintained by using write blocking devices when forensic evidence is examined?
Inventory
Integrity
Confidentiality
Availability
Write blocking devices are used to prevent any modification of the forensic evidence when it is examined. This preserves the integrity of the evidence and ensures its admissibility in court. Write blocking devices do not affect the inventory, confidentiality, or availability of the evidence.
Which of the following provides the MOST secure method for Network Access Control (NAC)?
Media Access Control (MAC) filtering
802.1X authentication
Application layer filtering
Network Address Translation (NAT)
Network Access Control (NAC) is a method of controlling and enforcing the security policies on the devices that connect to a network. NAC can prevent unauthorized or noncompliant devices from accessing the network resources, and can also monitor and remediate the devices that are already connected. The most secure method for NAC is 802.1X authentication, which is a standard for port-based network access control. 802.1X authentication uses the Extensible Authentication Protocol (EAP) to authenticate the devices (called supplicants) that request access to a network port (called an authenticator), such as a switch or a wireless access point. The authentication process involves a third party (called an authentication server), such as a RADIUS or a TACACS+ server, that verifies the credentials of the supplicant and grants or denies access to the network. 802.1X authentication provides strong security for NAC, as it prevents unauthorized devices from accessing the network, and can also enforce dynamic policies, such as VLAN assignment, encryption, or quarantine. Media Access Control (MAC) filtering is a less secure method for NAC, as it only checks the MAC address of the device, which can be easily spoofed or bypassed. Application layer filtering is a method of inspecting the content and the context of the network traffic at the application layer, but it does not provide NAC for the devices that connect to the network. Network Address Translation (NAT) is a method of modifying the IP addresses of the network packets, but it does not provide NAC for the devices that connect to the network. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 157. CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Network and Internet Security, page 276.
Which of the following is a secure design principle for a new product?
Build in appropriate levels of fault tolerance.
Utilize obfuscation whenever possible.
Do not rely on previously used code.
Restrict the use of modularization.
Building in appropriate levels of fault tolerance is a secure design principle for a new product. Fault tolerance is the ability of a system or a component to continue functioning correctly and reliably in the event of a failure, error, or disruption, such as a hardware malfunction, a power outage, or a network attack. Building in appropriate levels of fault tolerance can enhance the security, availability, and resilience of a system or a product, by providing mechanisms such as redundancy, backup, recovery, failover, or self-healing. Building in appropriate levels of fault tolerance can also prevent or mitigate the impact of potential threats or vulnerabilities, and ensure the continuity of the business operations and services. The other options are not secure design principles for a new product, as they either do not enhance the security, availability, or resilience of the system or the product, or do not follow the best practices or standards for software development. References: CISSP - Certified Information Systems Security Professional, Domain 3. Security Architecture and Engineering, 3.5 Implement and manage engineering processes using secure design principles, 3.5.1 Understand the fundamental concepts of security models, 3.5.1.1 Fault tolerance; CISSP Exam Outline, Domain 3. Security Architecture and Engineering, 3.5 Implement and manage engineering processes using secure design principles, 3.5.1 Understand the fundamental concepts of security models, 3.5.1.1 Fault tolerance
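Fault tolerance can take many forms, such as redundancy, failover, or graceful degradation. A small, simplified sketch of one such mechanism, retrying a flaky dependency with exponential backoff before degrading gracefully, is shown below; the failing service is simulated.

```python
import random
import time

def call_with_retry(operation, attempts=3, base_delay=0.5):
    """Retry a failing operation with exponential backoff, a simple form of fault tolerance."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except ConnectionError as exc:
            if attempt == attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1))
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)

def flaky_service():
    # Simulated dependency that fails most of the time.
    if random.random() < 0.7:
        raise ConnectionError("upstream unavailable")
    return "ok"

try:
    print(call_with_retry(flaky_service))
except ConnectionError:
    print("Service still unavailable after retries; failing over or degrading gracefully")
```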
Which of the following is the MAIN difference between a network-based firewall and a host-based firewall?
A network-based firewall is stateful, while a host-based firewall is stateless.
A network-based firewall controls traffic passing through the device, while a host-based firewall controls traffic destined for the device.
A network-based firewall verifies network traffic, while a host-based firewall verifies processes and applications.
A network-based firewall blocks network intrusions, while a host-based firewall blocks malware.
The main difference between a network-based firewall and a host-based firewall is that a network-based firewall controls traffic passing through the device, while a host-based firewall controls traffic destined for the device. A firewall is a device or a software that filters and regulates the network traffic based on a set of rules or policies, and that blocks or allows the traffic based on the source, destination, protocol, port, or content of the traffic. A network-based firewall is a type of firewall that is deployed at the network perimeter or the network segment, and that controls the traffic that passes through the device, such as the traffic that enters or exits the network, or the traffic that moves between different network zones or subnets. A host-based firewall is a type of firewall that is installed on a specific host or system, such as a server, a workstation, or a mobile device, and that controls the traffic that is destined for the device, such as the traffic that originates from or terminates at the device, or the traffic that is related to the applications or processes running on the device. The other options are not the main difference between a network-based firewall and a host-based firewall, as they either do not describe the characteristics or functions of the firewalls, or do not apply to both types of firewalls. References: CISSP - Certified Information Systems Security Professional, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.2 Implement secure communication channels, 4.2.2.1 Network and host-based firewalls; CISSP Exam Outline, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.2 Implement secure communication channels, 4.2.2.1 Network and host-based firewalls
What is the BEST control to be implemented at a login page in a web application to mitigate the ability to enumerate users?
Implement a generic response for a failed login attempt.
Implement a strong password during account registration.
Implement numbers and special characters in the user name.
Implement two-factor authentication (2FA) to login process.
The best control to be implemented at a login page in a web application to mitigate the ability to enumerate users is to implement a generic response for a failed login attempt. User enumeration is a technique that allows an attacker to discover the valid usernames or email addresses of the users of a web application, by exploiting the differences in the responses or messages from the login page. For example, if the login page displays a specific message such as "Invalid username" or "Invalid password" when a user enters an incorrect username or password, the attacker can use this information to guess or brute-force the valid usernames or passwords. To prevent user enumeration, the login page should implement a generic response for a failed login attempt, such as "Invalid username or password", regardless of whether the username or password is incorrect. This way, the attacker cannot distinguish between the valid and invalid usernames or passwords, and cannot enumerate the users of the web application. References: CISSP All-in-One Exam Guide, Chapter 8: Software Development Security, Section: Web Application Security, pp. 1066-1067.
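A simplified sketch of this control is shown below: the credential store, the hashing placeholder, and the usernames are hypothetical, but the key point is that the same generic message is returned whether the username is unknown or the password is wrong.

```python
import hmac

# Hypothetical credential store: username -> password hash (simplified here).
USERS = {"alice": "hash_of_alice_password"}
GENERIC_ERROR = "Invalid username or password."

def hash_password(password):
    # Placeholder for a real password hashing scheme such as bcrypt or Argon2.
    return "hash_of_" + password

def login(username, password):
    stored = USERS.get(username, "")
    supplied = hash_password(password)
    # Same comparison and same error message whether the username or the
    # password is wrong, so responses cannot be used to enumerate users.
    if stored and hmac.compare_digest(stored, supplied):
        return "Login successful."
    return GENERIC_ERROR

print(login("alice", "wrong"))     # Invalid username or password.
print(login("mallory", "guess"))   # Invalid username or password.
```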
Physical assets defined in an organization’s Business Impact Analysis (BIA) could include which of the following?
Personal belongings of organizational staff members
Supplies kept off-site at a remote facility
Cloud-based applications
Disaster Recovery (DR) line-item revenues
Supplies kept off-site at a remote facility are physical assets that could be defined in an organization’s Business Impact Analysis (BIA). A BIA is a process that involves identifying and evaluating the potential impacts of various disruptions or disasters on the organization’s critical business functions and processes, and determining the recovery priorities and objectives for the organization. A BIA can help the organization plan and prepare for the continuity and resilience of its business operations in the event of a crisis. A physical asset is a tangible and valuable resource that is owned or controlled by the organization and that supports its business activities and objectives. A physical asset could be hardware, software, a network, data, a facility, equipment, materials, or personnel. Supplies kept off-site at a remote facility are physical assets that could be defined in a BIA, as they are resources that are essential for the organization’s business operations and that could be affected by a disruption or a disaster. For example, the organization may need to access or use the supplies to resume or restore its business functions and processes, or to mitigate or recover from the impacts of the crisis. Therefore, the organization should include the supplies kept off-site at a remote facility in its BIA, and assess the potential impacts, risks, and dependencies of these assets on its business continuity and recovery. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 387. CISSP Practice Exam – FREE 20 Questions and Answers, Question 16.
Which of the following is the MOST effective corrective control to minimize the effects of a physical intrusion?
Automatic videotaping of a possible intrusion
Rapid response by guards or police to apprehend a possible intruder
Activating bright lighting to frighten away a possible intruder
Sounding a loud alarm to frighten away a possible intruder
A rapid response by guards or police to apprehend a possible intruder is the most effective corrective control to minimize the effects of a physical intrusion. A corrective control is a type of security control that aims to restore the normal operations and security level of a system or an organization after a security incident or violation has occurred. A physical intrusion is an unauthorized or unwanted entry into a physical facility or area that contains sensitive or valuable assets, such as IT equipment, data, or personnel. A physical intrusion can cause damage, theft, sabotage, or espionage to the assets, and compromise the confidentiality, integrity, or availability of the information or services provided by the organization. A rapid response by guards or police to apprehend a possible intruder can prevent or limit the extent of the damage or loss caused by the physical intrusion, and deter or punish the intruder. A rapid response can also provide evidence or information for further investigation or prosecution of the incident. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Security Operations, page 503. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, page 835.
In systems security engineering, what does the security principle of modularity provide?
Documentation of functions
Isolated functions and data
Secure distribution of programs and data
Minimal access to perform a function
In systems security engineering, the security principle of modularity provides isolated functions and data. Modularity means dividing a system into well-defined logical units that have clear boundaries and interfaces. This helps to reduce complexity, improve maintainability, and enhance security. By isolating functions and data, modularity prevents unauthorized access, limits the impact of failures, and facilitates testing and verification. Modularity also supports the principle of least privilege, which grants the minimum access required to perform a function. References: CISSP Official Study Guide, 9th Edition, page 1019; CISSP All-in-One Exam Guide, 8th Edition, page 1098
What is the PRIMARY benefit of analyzing the partition layout of a hard disk volume when performing forensic analysis?
Sectors which are not assigned to a partition may contain data that was purposely hidden.
Volume address information for the hard disk may have been modified.
Partition tables which are not completely utilized may contain data that was purposely hidden.
Physical address information for the hard disk may have been modified.
The primary benefit of analyzing the partition layout of a hard disk volume when performing forensic analysis is to find data that was purposely hidden in unused or unallocated space. A partition is a logical division of a hard disk volume that can contain a file system, an operating system, or other data. A partition table is a data structure that stores information about the partitions, such as their size, location, type, and status. By analyzing the partition table, a forensic examiner can identify the partitions that are active, inactive, hidden, or deleted, and recover data from them. Sometimes, malicious users or attackers may hide data in partitions that are not completely utilized, such as slack space, free space, or unpartitioned space, to avoid detection or deletion. By analyzing the partition layout, a forensic examiner can discover and extract such data and use it as evidence.
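As a simplified illustration, the sketch below parses the four primary partition entries of an MBR-partitioned disk image and reports sector ranges not covered by any partition; some of those ranges are normal alignment gaps, but unusually large ones are worth examining. The image file name is hypothetical, GPT-partitioned disks require different parsing, and dedicated forensic tools perform this analysis far more thoroughly.

```python
import struct

SECTOR = 512

def read_mbr_partitions(image_path):
    """Parse the four primary MBR partition entries from a raw disk image."""
    with open(image_path, "rb") as f:
        mbr = f.read(SECTOR)
    entries = []
    for i in range(4):
        raw = mbr[446 + i * 16: 446 + (i + 1) * 16]
        ptype = raw[4]
        lba_start, sector_count = struct.unpack("<II", raw[8:16])
        if ptype != 0:  # type 0x00 means the slot is unused
            entries.append({"type": hex(ptype), "start": lba_start, "sectors": sector_count})
    return entries

def find_unallocated_gaps(entries, total_sectors):
    """Report sector ranges not covered by any partition - places data can be hidden."""
    gaps, cursor = [], 1  # sector 0 holds the MBR itself
    for e in sorted(entries, key=lambda e: e["start"]):
        if e["start"] > cursor:
            gaps.append((cursor, e["start"] - 1))
        cursor = max(cursor, e["start"] + e["sectors"])
    if cursor < total_sectors:
        gaps.append((cursor, total_sectors - 1))
    return gaps

# Hypothetical usage against a forensic image file named 'disk.dd':
# parts = read_mbr_partitions("disk.dd")
# print(parts, find_unallocated_gaps(parts, total_sectors=2097152))
```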
The Chief Information Officer (CIO) has decided that as part of business modernization efforts the organization will move towards a cloud architecture. All business-critical data will be migrated to either internal or external cloud services within the next two years. The CIO has a PRIMARY obligation to work with personnel in which role in order to ensure proper protection of data during and after the cloud migration?
Information owner
General Counsel
Chief Information Security Officer (CISO)
Chief Security Officer (CSO)
The role that the CIO has a primary obligation to work with in order to ensure proper protection of data during and after the cloud migration is the Chief Information Security Officer (CISO). The CISO is the senior executive who is responsible for establishing, implementing, and maintaining the information security strategy, policies, and programs of the organization, and for ensuring the alignment and integration of the information security objectives and activities with the business goals and operations of the organization. The CISO is the role that the CIO has a primary obligation to work with in order to ensure proper protection of data during and after the cloud migration, as the CISO can provide the leadership, guidance, and expertise for the information security aspects of the cloud migration, such as the risk assessment, the security controls, the compliance requirements, the incident response, and the security monitoring of the cloud services . References: [CISSP CBK, Fifth Edition, Chapter 1, page 28]; [100 CISSP Questions, Answers and Explanations, Question 20].
When conducting a third-party risk assessment of a new supplier, which of the following reports should be reviewed to confirm the operating effectiveness of the security, availability, confidentiality, and privacy trust principles?
Service Organization Control (SOC) 1, Type 2
Service Organization Control (SOC) 2, Type 2
International Organization for Standardization (ISO) 27001
International Organization for Standardization (ISO) 27002
Service Organization Control (SOC) reports are audit reports that provide information about the internal controls and processes of a service organization, such as a cloud provider, a data center, or a payroll service. There are three types of SOC reports: SOC 1, SOC 2, and SOC 3. SOC 1 reports focus on the controls that affect the financial reporting of the user entities (the clients of the service organization). SOC 2 reports focus on the controls that affect the security, availability, confidentiality, and privacy of the user entities’ data and systems, as well as the processing integrity of the service organization. SOC 3 reports are similar to SOC 2 reports, but they are less detailed and more accessible to the general public. Each SOC report can be either Type 1 or Type 2. Type 1 reports describe the design and implementation of the controls at a specific point in time. Type 2 reports describe the operating effectiveness of the controls over a period of time, usually six to twelve months. When conducting a third-party risk assessment of a new supplier, the best report to review to confirm the operating effectiveness of the security, availability, confidentiality, and privacy trust principles is the SOC 2, Type 2 report. This report provides assurance that the service organization has implemented and maintained the controls that are relevant to the protection of the user entities’ data and systems, and that the controls have been tested and verified by an independent auditor. International Organization for Standardization (ISO) 27001 and ISO 27002 are not audit reports, but rather standards for information security management systems (ISMS). ISO 27001 specifies the requirements for establishing, implementing, maintaining, and improving an ISMS. ISO 27002 provides guidelines and best practices for implementing the controls of the ISMS. While these standards can be used as a reference for evaluating the security posture of a service organization, they do not provide the same level of assurance and evidence as a SOC 2, Type 2 report. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 66-67. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 103-104.
What is the MAIN objective of risk analysis in Disaster Recovery (DR) planning?
Establish Maximum Tolerable Downtime (MTD) Information Systems (IS).
Define the variable cost for extended downtime scenarios.
Identify potential threats to business availability.
Establish personnel requirements for various downtime scenarios.
Risk analysis is the process of identifying, assessing, and prioritizing the risks that could affect an organization’s assets, operations, or objectives. Risk analysis is an essential component of Disaster Recovery (DR) planning, as it helps to determine the likelihood and impact of various disaster scenarios, and to develop appropriate mitigation and recovery strategies. The main objective of risk analysis in DR planning is to identify potential threats to business availability, which is the ability of an organization to continue its critical business functions and processes in the event of a disaster. Business availability depends on the availability of the information systems (IS) that support the business functions and processes, such as hardware, software, data, network, and personnel. Risk analysis helps to identify the vulnerabilities and threats that could compromise the availability of the IS, and to estimate the potential losses and damages that could result from a disaster. Risk analysis also helps to establish the recovery point objective (RPO), which is the maximum acceptable amount of data loss after a disaster, and the recovery time objective (RTO), which is the maximum acceptable time to restore the normal operations after a disaster. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Security Operations, page 330. [CISSP All-in-One Exam Guide, Eighth Edition], Chapter 8: Business Continuity and Disaster Recovery Planning, page 461.
Which of the following are mandatory canons for the (ISC)² Code of Ethics?
Develop comprehensive security strategies for the organization.
Perform duties honestly, fairly, responsibly, and lawfully for the organization.
Create secure data protection policies for principals.
Provide diligent and competent service to principals.
The (ISC)² Code of Ethics is a set of principles and guidelines that govern the professional and ethical conduct of (ISC)² certified members and associates. The Code of Ethics consists of four mandatory canons, which are: Protect society, the common good, necessary public trust and confidence, and the infrastructure. Act honorably, honestly, justly, responsibly, and legally. Provide diligent and competent service to principals. Advance and protect the profession. The other two options are not mandatory canons, but rather examples of how to apply the canons in practice. Develop comprehensive security strategies for the organization is an example of how to protect society, the common good, necessary public trust and confidence, and the infrastructure. Create secure data protection policies for principals is an example of how to provide diligent and competent service to principals. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 24-25. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 19-20.
Which of the following is a common term for log reviews, synthetic transactions, and code reviews?
Security control testing
Application development
Spiral development functional testing
DevOps Integrated Product Team (IPT) development
Security control testing is a common term for various methods of assessing the effectiveness and compliance of security controls in an information system. Log reviews, synthetic transactions, and code reviews are examples of security control testing techniques that can be used to verify the functionality, performance, and security of an application or system. Log reviews involve analyzing the records of events and activities that occurred in an information system, such as user actions, system errors, or security incidents. Synthetic transactions are simulated user interactions with an application or system that are designed to test its functionality and performance under different scenarios and conditions. Code reviews are inspections of the source code of an application or system that are performed to identify errors, vulnerabilities, or deviations from best practices. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10: Security Assessment and Testing, pp. 1015-1016; [Official (ISC)2 CISSP CBK Reference, Fifth Edition], Domain 6: Security Assessment and Testing, pp. 1019-1020.
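A synthetic transaction can be as simple as a scheduled script that exercises an endpoint and checks the result, as in the sketch below; the URL and the thresholds are placeholders for whatever the application's monitoring requirements specify.

```python
import time
import urllib.request

def synthetic_transaction(url, expected_status=200, timeout=5):
    """Run one scripted probe against a service and report status and latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            status = response.status
    except Exception as exc:
        return {"url": url, "ok": False, "error": str(exc)}
    latency_ms = (time.monotonic() - start) * 1000
    return {
        "url": url,
        "ok": status == expected_status and latency_ms < 2000,
        "status": status,
        "latency_ms": round(latency_ms, 1),
    }

# Hypothetical endpoint; a monitoring scheduler would run this on a fixed interval.
print(synthetic_transaction("https://example.com/health"))
```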
Which of the following is the BEST way to protect against Structured Query Language (SQL) injection?
Enforce boundary checking.
Restrict use of SELECT command.
Restrict Hyper Text Markup Language (HTML) source code access.
Use stored procedures.
The best way to protect against SQL injection is to use stored procedures. SQL injection is a type of attack that exploits a vulnerability in a web application that allows an attacker to inject malicious SQL commands into the input fields or parameters of the application. The attacker can then execute the SQL commands on the underlying database and access, modify, or delete the data. Stored procedures are precompiled SQL statements that are stored on the database server and can be invoked by the application. Stored procedures can prevent SQL injection by separating the SQL logic from the user input, validating the input parameters, and escaping the special characters that can alter the SQL syntax. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 424; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8: Software Development Security, page 564]
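The underlying principle, keeping the SQL logic fixed and treating user input strictly as data, can be demonstrated with parameterized queries, which stored procedures with typed parameters also rely on. The sketch below uses the standard library's sqlite3 module (chosen for convenience; it has no stored procedures) to contrast a concatenated query with a parameterized one against a classic injection payload.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: user input is concatenated straight into the SQL text,
# so the payload changes the meaning of the query and returns every row.
vulnerable = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the query text is fixed and the input is bound as a parameter,
# so the payload is treated as a literal value and matches nothing.
safe = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()

print("Concatenated query returned:", vulnerable)   # both rows leak
print("Parameterized query returned:", safe)        # []
```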
In a quarterly system access review, an active privileged account was discovered that did not exist in the prior review on the production system. The account was created one hour after the previous access review. Which of the following is the BEST option to reduce overall risk in addition to quarterly access reviews?
Increase logging levels.
Implement bi-annual reviews.
Create policies for system access.
Implement and review risk-based alerts.
The best option to reduce overall risk in addition to quarterly access reviews is to implement and review risk-based alerts. Risk-based alerts are notifications that are triggered by predefined events or conditions that indicate a potential security breach or misuse of system resources. For example, a risk-based alert could be generated when a privileged account is created, modified, or deleted, or when a privileged account performs an unusual or unauthorized activity. By implementing and reviewing risk-based alerts, the organization can detect and respond to suspicious or malicious actions involving privileged accounts in a timely manner, and prevent or minimize the impact of security incidents. The other options are not as effective as risk-based alerts. Increasing logging levels may generate more data, but it does not necessarily improve the detection or analysis of security events. Implementing bi-annual reviews may reduce the time gap between access reviews, but it does not address the issue of detecting unauthorized changes or activities between reviews. Creating policies for system access is a good practice, but it does not ensure that the policies are enforced or complied with by the system users or administrators. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Security Operations, page 1043. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Security Operations, page 1044.
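A risk-based alert of this kind can be sketched as a small rule over the audit event stream; the JSON-lines event format and group names below are hypothetical, and in practice such rules would live in a SIEM or log analytics platform.

```python
import json

PRIVILEGED_GROUPS = {"Administrators", "Domain Admins", "root"}

def raise_alert(event):
    # Stand-in for sending to a SIEM, ticketing system, or on-call channel.
    print(f"ALERT: privileged account activity - {event}")

def review_audit_events(lines):
    """Scan audit events (hypothetical JSON-lines format) for risky changes."""
    for line in lines:
        event = json.loads(line)
        if event.get("action") == "account_created" and event.get("group") in PRIVILEGED_GROUPS:
            raise_alert(event)
        if event.get("action") == "group_membership_added" and event.get("group") in PRIVILEGED_GROUPS:
            raise_alert(event)

sample_log = [
    '{"action": "account_created", "user": "svc_new", "group": "Domain Admins"}',
    '{"action": "login", "user": "alice"}',
]
review_audit_events(sample_log)
```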
At what stage of the Software Development Life Cycle (SDLC) does software vulnerability remediation MOST likely cost the least to implement?
Development
Testing
Deployment
Design
Software vulnerability remediation is the process of identifying and fixing the weaknesses or flaws in a software application or system that could be exploited by attackers. Software vulnerability remediation is most likely to cost the least to implement at the design stage of the Software Development Life Cycle (SDLC), which is the phase where the requirements and specifications of the software are defined and the architecture and components of the software are designed. At this stage, the software developers can apply security principles and best practices, such as secure by design, secure by default, and secure coding, to prevent or minimize the introduction of vulnerabilities in the software. Remediation at the design stage is also easier and cheaper than at later stages, such as development, testing, or deployment, because it does not require modifying or rewriting the existing code, which could introduce new errors or affect the functionality or performance of the software. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, pp. 2021-2022; [Official (ISC)2 CISSP CBK Reference, Fifth Edition], Domain 8: Software Development Security, pp. 1395-1396.
When reviewing vendor certifications for handling and processing of company data, which of the following is the BEST Service Organization Controls (SOC) certification for the vendor to possess?
SOC 1 Type 1
SOC 2 Type 1
SOC 2 Type 2
SOC 3
The best Service Organization Controls (SOC) certification for the vendor to possess for handling and processing of company data is SOC 2 Type 2. SOC is a framework that defines the standards and criteria for the reporting and auditing of the internal controls and processes of a service organization, such as a cloud service provider, that affect the security, availability, processing integrity, confidentiality, or privacy of the information and systems of the user entities, such as the customers or clients of the service organization. SOC 2 Type 2 is a certification that indicates that the service organization has undergone an independent audit that evaluates and verifies the design and operating effectiveness of the internal controls and processes of the service organization, based on the Trust Services Criteria (TSC) of security, availability, processing integrity, confidentiality, or privacy, over a period of time, usually six to twelve months. SOC 2 Type 2 is the best SOC certification for the vendor to possess for handling and processing of company data, as it can provide the highest level of assurance and transparency for the security, reliability, and compliance of the vendor’s services . References: [CISSP CBK, Fifth Edition, Chapter 2, page 90]; [CISSP Practice Exam – FREE 20 Questions and Answers, Question 20].
A user is allowed to access the file labeled “Financial Forecast,” but only between 9:00 a.m. and 5:00 p.m., Monday through Friday. Which type of access mechanism should be used to accomplish this?
Minimum access control
Rule-based access control
Limited role-based access control (RBAC)
Access control list (ACL)
Rule-based access control is a type of access mechanism that uses predefined rules or policies to grant or deny access to resources based on certain conditions or criteria. Rule-based access control can be used to accomplish the requirement of allowing a user to access the file labeled “Financial Forecast,” but only between 9:00 a.m. and 5:00 p.m., Monday through Friday. The rule-based access control system can evaluate the attributes of the user, the file, and the environment, such as the identity, role, location, time, or date, and compare them with the rules or policies that specify the access conditions. For example, a rule-based access control policy could state that “Users in the finance department can access the file ‘Financial Forecast’ only from the office network and only during business hours.” If the user and the file match the criteria of the rule, then access is granted; otherwise, access is denied. References: CISSP All-in-One Exam Guide, Chapter 5: Identity and Access Management, Section: Access Control Models, pp. 403-404.
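A minimal sketch of such a rule is shown below: access to the labeled file is granted only when the current day and time fall within the permitted window. The file name and window reflect the scenario; a real implementation would evaluate the rule inside the access control enforcement point rather than in application code.

```python
from datetime import datetime, time

BUSINESS_START = time(9, 0)
BUSINESS_END = time(17, 0)
WEEKDAYS = range(0, 5)  # Monday=0 ... Friday=4

def access_permitted(filename, now=None):
    """Rule: 'Financial Forecast' may be read only 09:00-17:00, Monday-Friday."""
    now = now or datetime.now()
    if filename != "Financial Forecast":
        return False  # this rule covers only the one labeled file
    return now.weekday() in WEEKDAYS and BUSINESS_START <= now.time() <= BUSINESS_END

print(access_permitted("Financial Forecast", datetime(2024, 6, 3, 10, 30)))  # Monday morning -> True
print(access_permitted("Financial Forecast", datetime(2024, 6, 8, 10, 30)))  # Saturday -> False
```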
An organization has developed a way for customers to share information from their wearable devices with each other. Unfortunately, the users were not informed as to what information collected would be shared. What technical controls should be put in place to remedy the privacy issue while still trying to accomplish the organization's business goals?
Default the user to not share any information.
Inform the user of the sharing feature changes after implemented.
Share only what the organization decides is best.
Stop sharing data with the other users.
The privacy issue in this scenario is that the users were not informed as to what information collected would be shared with other users, and they did not give their consent to share their information. This violates the privacy principles of notice and choice, which state that the users should be notified of the data collection and sharing practices of the organization, and they should be given the option to opt-in or opt-out of the sharing feature. The technical control that should be put in place to remedy the privacy issue while still trying to accomplish the organization’s business goals is to default the user to not share any information, and to provide the user with a clear and easy way to enable the sharing feature if they wish to do so. This way, the user’s privacy is respected and protected, and the user can still participate in the sharing feature if they find it beneficial. The other answer choices are not appropriate, because they either do not address the privacy issue, or they do not support the organization’s business goals . References: [CISSP CBK, Fifth Edition, Chapter 1, page 35]; [CISSP Practice Exam – FREE 20 Questions and Answers, Question 20].
An attacker is able to remain indefinitely logged into a web service. What is the attacker exploiting to remain on the web service?
Alert management
Password management
Session management
Identity management (IM)
Session management is the process of controlling and maintaining the state and information of a user’s interaction with a web service. It involves creating, maintaining, and terminating sessions, as well as ensuring their security and integrity. An attacker who is able to remain indefinitely logged into a web service is exploiting a weakness in session management, such as the lack of session expiration, session timeout, or session revocation. Alert management, password management, and identity management (IM) are all related to security, but they do not directly address the issue of session management.
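A simplified sketch of sound session management is shown below: sessions carry an expiry time, expired sessions are revoked rather than reused, and logout explicitly destroys the session. The timeout value and in-memory store are illustrative only.

```python
import secrets
import time

SESSION_TTL_SECONDS = 15 * 60   # illustrative timeout
sessions = {}                    # token -> {"user": ..., "expires": ...}

def create_session(user):
    token = secrets.token_urlsafe(32)
    sessions[token] = {"user": user, "expires": time.time() + SESSION_TTL_SECONDS}
    return token

def get_session(token):
    """Return the session only if it exists and has not expired; otherwise revoke it."""
    session = sessions.get(token)
    if session is None:
        return None
    if time.time() > session["expires"]:
        del sessions[token]      # expired sessions are removed, not reused
        return None
    return session

def logout(token):
    sessions.pop(token, None)    # explicit revocation on logout

token = create_session("alice")
print(get_session(token))        # valid only while within the TTL
```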
A large corporation is looking for a solution to automate access based on where a request is coming from, who the user is, what device they are connecting with, and what time of day they are attempting this access. What type of solution would suit their needs?
Discretionary Access Control (DAC)
Role Based Access Control (RBAC)
Mandatory Access Control (MAC)
Network Access Control (NAC)
The type of solution that would suit the needs of a large corporation that wants to automate access based on where the request is coming from, who the user is, what device they are connecting with, and what time of day they are attempting this access is Network Access Control (NAC). NAC is a solution that enables the enforcement of security policies and rules on the network level, by controlling the access of devices and users to the network resources. NAC can automate access based on various factors, such as the location, identity, role, device type, device health, or time of the request. NAC can also perform functions such as authentication, authorization, auditing, remediation, or quarantine of the devices and users that attempt to access the network. Discretionary Access Control (DAC), Role Based Access Control (RBAC), and Mandatory Access Control (MAC) are not types of solutions, but types of access control models that define how the access rights or permissions are granted or denied to the subjects or objects. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 737; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 517.
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. Such a program maintains ongoing awareness of the security status, events, and activities of a system or network by collecting, analyzing, and reporting security data and information using various methods and tools.
Encompassing people, process, and technology best reduces risk because it makes the monitoring program holistic and comprehensive, covering all aspects and elements of system or network security. People, process, and technology are the three pillars of a continuous monitoring program: people are the staff who operate the program and act on its findings, process is the set of procedures that govern how monitoring is performed and how results are handled, and technology is the set of tools that collect, correlate, and report the security data.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
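As a small illustration of collecting and correlating events (one of the activities such a program encompasses), the sketch below groups hypothetical events by source and flags any source that touches several systems within a short window; real monitoring platforms apply far richer correlation rules.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 3   # distinct systems reporting the same source within the window

def correlate(events):
    """Group (timestamp, source, system) events and flag sources seen on many systems quickly."""
    by_source = defaultdict(list)
    for ts, source, system in sorted(events):
        by_source[source].append((ts, system))
    anomalies = []
    for source, hits in by_source.items():
        for ts, _ in hits:
            systems = {sys for t, sys in hits if ts <= t <= ts + WINDOW}
            if len(systems) >= THRESHOLD:
                anomalies.append((source, sorted(systems)))
                break
    return anomalies

events = [
    (datetime(2024, 1, 1, 12, 0), "10.0.0.5", "vpn-gateway"),
    (datetime(2024, 1, 1, 12, 2), "10.0.0.5", "file-server"),
    (datetime(2024, 1, 1, 12, 4), "10.0.0.5", "hr-database"),
    (datetime(2024, 1, 1, 12, 4), "10.0.0.9", "web-proxy"),
]
print(correlate(events))   # [('10.0.0.5', ['file-server', 'hr-database', 'vpn-gateway'])]
```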
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator, and that may have been installed or executed without the user’s knowledge or consent. An unknown application may have many possible purposes, ranging from legitimate but unapproved software to malware such as spyware, a backdoor, or a data exfiltration tool.
Forensic analysis is a process that involves examining and investigating the system or network for any evidence or traces of the unknown application, such as its origin, nature, behavior, and impact. This analysis allows the investigator to determine what the application does, how it arrived on the system, and what damage or exposure it may have caused.
Isolating the system from the network is the most important step because it ensures that the system is protected from external or internal interference and that the forensic analysis is conducted in a safe and controlled environment. Isolation also prevents the unknown application from communicating with external hosts, spreading to other systems, or receiving commands that could alter or destroy evidence.
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services is a step that should be done after isolating the system from the network, because it can ensure that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step that should be done along with isolating the system from the network, because it can ensure that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system is a step that should be done after isolating the system from the network, because it can ensure that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.
Recovery strategies of Disaster Recovery Planning (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of Disaster Recovery Planning (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is the part of a BCP/DRP program that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster, and it typically includes components such as the recovery objectives, the recovery strategies, the recovery procedures, and the roles and responsibilities of the recovery teams.
Recovery strategies must be aligned with the cost/benefit analysis and business objectives because this ensures that the DRP is feasible and suitable, and that it can achieve the desired outcomes in a cost-effective and efficient manner. A cost/benefit analysis compares the costs and benefits of different recovery strategies and determines the option that provides the best value for money. A business objective is a goal or target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy aligned with both can help to justify recovery investments, make the best use of limited resources and funds, and ensure that the recovery of IT systems supports the organization’s mission and priorities.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost-effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are hot sites, which are fully equipped and configured and can take over operations within minutes or hours; warm sites, which provide space, power, connectivity, and some pre-installed equipment but require data restoration and final configuration before use; cold sites, which provide only basic space and utilities and may take days or weeks to become operational; and mirror sites, which fully duplicate the primary site with real-time data synchronization.
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are not the most cost effective solutions for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, but rather solutions that are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to invest a lot of money on the DR site equipment, software, and services, and to pay for the ongoing operational and maintenance costs. A hot site may be more suitable for the organization’s systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to duplicate its entire primary site, with the same hardware, software, data, and applications, and to keep them online and synchronized at all times. A mirror site may be more suitable for the organization’s systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is a solution that is too slow for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to spend a lot of time and effort on the DR site installation, configuration, and restoration, and to rely on other sources of backup data and applications. A cold site may be more suitable for the organization’s systems that can be unavailable for more than a few days or weeks, or that have very low criticality and priority.
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is a part of a BCP/DRP that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster. A BCP should include various components, such as:
A BCP is considered to be valid when it has been validated by realistic exercises, because it can ensure that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a type of testing, training, and exercises that involve performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can provide several benefits, such as:
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties that are involved in developing or approving a BCP. When it has been validated by the Business Continuity (BC) manager is not a criterion for considering a BCP to be valid, but rather a step that is involved in developing a BCP. The BC manager is the person who is responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes, and ensuring that they meet the BCP standards and objectives. However, the validation by the BC manager is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by the board of directors is not a criterion for considering a BCP to be valid, but rather a party that is involved in approving a BCP. The board of directors is the group of people who are elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board of directors can approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the necessary resources and funds for the BCP. However, the approval by the board of directors is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by all threat scenarios is not a criterion for considering a BCP to be valid, but rather an unrealistic or impossible expectation for validating a BCP. A threat scenario is a description or a simulation of a possible or potential disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP’s performance and effectiveness in responding and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat scenarios, as there are too many or unknown threat scenarios that might occur, and some threat scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, and not by all threat scenarios.
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or a web application on the internet. A Web hosting solution can offer various benefits, such as:
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA can include various components, such as:
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the Web hosting solution quality, performance, and availability, and to identify and address any issues or risks in the Web hosting solution.
The other options are not the most probable causes for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, but rather the factors that could affect or improve the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution is a factor that could affect the ability of the organization to analyze and utilize the data and information from the Web hosting solution, such as the web traffic, behavior, or conversion. A BI solution is a system that involves the collection, integration, processing, and presentation of the data and information from various sources, such as the Web hosting solution, to support the decision making and planning of the organization. However, absence of a BI solution is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the analysis or usage of the performance indicators for the Web hosting solution. Inadequate cost modeling is a factor that could affect the ability of the organization to estimate and optimize the cost and value of the Web hosting solution, such as the web hosting fees, maintenance costs, or return on investment. A cost model is a tool or a method that helps the organization to calculate and compare the cost and value of the Web hosting solution, and to identify and implement the best or most efficient Web hosting solution. However, inadequate cost modeling is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the estimation or optimization of the cost and value of the Web hosting solution. Improper deployment of the Service-Oriented Architecture (SOA) is a factor that could affect the ability of the organization to design and develop the Web hosting solution, such as the web services, components, or interfaces. A SOA is a software architecture that involves the modularization, standardization, and integration of the software components or services that provide the functionality or logic of the Web hosting solution. A SOA can offer various benefits, such as:
However, improper deployment of the SOA is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the design or development of the Web hosting solution.
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios. Business continuity testing can provide several benefits, such as:
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Some of the common types are checklist reviews, walkthroughs (also called tabletop exercises), simulations, parallel tests, and full-interruption tests.
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not the types of business continuity tests that include assessment of resilience to internal and external risks without endangering live operations, but rather types that have other objectives or effects. Walkthrough is a type of business continuity test that does not include assessment of resilience to internal and external risks, but rather a review and discussion of the BCP and DRP, without any actual testing or practice. Parallel is a type of business continuity test that does not endanger live operations, but rather maintains them, while activating and operating the alternate site or system. White box is not a type of business continuity test at all; it is a software testing approach in which the tester has full knowledge of the internal structure of the code. By contrast, a full-interruption test does endanger live operations, because it shuts them down and transfers them to the alternate site or system.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should include information such as:
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
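As an illustration of the hashing step described above, the following is a minimal Java sketch (the image file name is hypothetical) that computes a SHA-256 digest of a forensic disk image; the resulting value would be recorded in the chain-of-custody documentation and recomputed later to demonstrate that the evidence copy has not been altered.
```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class ImageHash {
    public static void main(String[] args) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        // Stream the image through the digest so even very large images can be hashed.
        try (InputStream in = Files.newInputStream(Path.of("disk-image.dd"))) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                sha256.update(buffer, 0, read);
            }
        }
        // Record this value with the evidence; recomputing it later verifies integrity.
        System.out.println(HexFormat.of().formatHex(sha256.digest()));
    }
}
```
The same digest, computed over both the original drive and the copy, shows that the two are bit-for-bit identical.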
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service is a service that provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems, using a single identity provider (IdP). A third-party identity service can offer various benefits, such as:
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes, by reducing the number of IdPs and IAM systems that are involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency, redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios of using a third-party identity service. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant for the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service, to enable the SSO or federation for the users. Web based logon is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service uses a web-based protocol, such as SAML or OAuth, to facilitate the SSO or federation for the users, by redirecting them to a web-based logon page, where they can enter their credentials or consent. Automated account management is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service provides the IAM functions, such as provisioning, deprovisioning, or updating, for the user accounts and access rights, using an automated or self-service mechanism, such as SCIM or JIT.
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change management can provide several benefits, such as:
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of the IT systems or data. An incident response is a process that involves detecting, analyzing, containing, eradicating, recovering, and learning from an incident, using various methods and tools. An incident response can provide several benefits, such as:
Investigating all symptoms to confirm the incident is the first step in the incident response process, because it can ensure that the incident is verified and validated, and that the incident response is initiated and escalated. A symptom is a sign or an indication that an incident may have occurred or is occurring, such as an alert, a log, or a report. Investigating all symptoms to confirm the incident involves collecting and analyzing the relevant data and information from various sources, such as the IT systems, the network, the users, or the external parties, and determining whether an incident has actually happened or is happening, and how serious or urgent it is. Investigating all symptoms to confirm the incident can also help to:
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with investigating all symptoms to confirm the incident. Determining the cause of the incident is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the root cause and source of the incident are identified and analyzed, and that the incident response is directed and focused. Determining the cause of the incident involves examining and testing the affected IT systems and data, and tracing and tracking the origin and path of the incident, using various techniques and tools, such as forensics, malware analysis, or reverse engineering. Determining the cause of the incident can also help to:
Disconnecting the system involved from the network is a step that should be done along with investigating all symptoms to confirm the incident, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the incident response is conducted in a safe and controlled environment. Disconnecting the system involved from the network can also help to:
Isolating and containing the system involved is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the incident is confined and restricted, and that the incident response is continued and maintained. Isolating and containing the system involved involves applying and enforcing the appropriate security measures and controls to limit or stop the activity and impact of the incident on the IT systems and data, such as firewall rules, access policies, or encryption keys. Isolating and containing the system involved can also help to:
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster. A BCP/DRP can provide several benefits, such as:
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it can ensure that the organization and its staff have clear and consistent guidance and direction on how to respond and act during a disruption or disaster, and avoid any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged. A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential business functions. Insurance against litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a guarantee or protection that the organization will not face any legal or regulatory consequences or liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and may have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it is not a prevention or avoidance of any damage or destruction of the organization’s assets or resources during a disruption or disaster, especially if the disruption or disaster is physical or natural. A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have to incur some costs or losses.
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. ISCM can provide several benefits, such as:
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not feasible or necessary to monitor all security controls at the same and constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before and after each change of the control is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not sufficient or timely to monitor the security control only when there is a change of the security control, and not during the normal operation of the security control. Monitoring the security control only before and after each change might cause the ISCM solutions to miss or ignore the security status, events, and activities that occur between the changes of the security control, and might delay or hinder the ISCM solutions from detecting and responding to the security issues or incidents that affect the security control. Only during system implementation and decommissioning is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages of the system or network lifecycle. Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as:
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as:
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to the system components and resources, using various techniques and tools, such as version control, change control, or configuration audits. Configuration management and control can provide several benefits, such as:
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses, such as the MIT, BSD, or Apache licenses, which allow the software to be used, modified, and distributed with few conditions, and copyleft licenses, such as the GNU General Public License (GPL), which require that any modified or derivative works be released under the same or a compatible license.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk with using open source software in a commercial software construction, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk with using open source software in a commercial software construction, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software is a secondary risk with using open source software in a commercial software construction, as it may affect the reliability, security, or performance of the open source software, but it can be mitigated or avoided by choosing the open source software that has adequate or alternative support options.
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, the bytecode verifier, the security manager and access controller, and the security policy files that grant permissions to code based on its source, signer, or protection domain.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
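As a minimal sketch of this point (the host name, port, and file path are hypothetical, and the file from computer A is assumed to be reachable as a local or mounted path on computer C), the following Java fragment performs the two sensitive operations involved; under a restrictive Java security policy such as the sandbox, each call fails with a SecurityException unless the corresponding permission has been granted to the code.
```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileRelay {
    public static void main(String[] args) throws Exception {
        // Requires: permission java.io.FilePermission "/data/source.txt", "read";
        byte[] data = Files.readAllBytes(Path.of("/data/source.txt"));

        // Requires: permission java.net.SocketPermission "computerB.example.com:9000", "connect";
        try (Socket socket = new Socket("computerB.example.com", 9000);
             OutputStream out = socket.getOutputStream()) {
            out.write(data);
        }
    }
}
```
With least privilege applied, the policy grants only these two permissions to this code base and nothing more, which is also why the program fails when it runs without them.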
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with the current technologies and standards. Legacy web applications may have various security issues, such as:
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications, because it can provide several benefits, such as:
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, but not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible to do for the legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues in legacy web applications, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible to do for the legacy web applications that are incompatible or obsolete. Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, which is a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on the predefined rules or policies, which may not be effective or efficient to do for the legacy web applications that have weak or outdated encryption or authentication mechanisms.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause the OS to malfunction or behave unexpectedly. OS bugs can be exploited by attackers to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of OS bugs, because it can provide several benefits, such as:
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
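As a brief, hypothetical illustration of the control named first, "check arguments in function calls", the following Java sketch validates a caller-supplied value against the destination buffer before copying it, rejecting anything that would not fit rather than trusting the caller to pass well-formed input.
```java
import java.nio.charset.StandardCharsets;

public final class InputChecks {
    private static final int MAX_NAME_BYTES = 64;

    // Copy a caller-supplied name into a fixed-size buffer only after checking
    // that it is present and that it actually fits the destination.
    static void copyName(String name, byte[] dest) {
        if (name == null) {
            throw new IllegalArgumentException("name must not be null");
        }
        byte[] src = name.getBytes(StandardCharsets.UTF_8);
        if (src.length > MAX_NAME_BYTES || src.length > dest.length) {
            throw new IllegalArgumentException("name exceeds buffer size");
        }
        System.arraycopy(src, 0, dest, 0, src.length);
    }

    public static void main(String[] args) {
        byte[] buffer = new byte[MAX_NAME_BYTES];
        copyName("alice", buffer);              // accepted
        try {
            copyName("x".repeat(200), buffer);  // rejected: too large for the buffer
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```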
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as:
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as:
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential typically works as follows: the user first authenticates with the original smart card and PIN; a derived key and certificate are then generated and provisioned into a protected keystore or secure element on the mobile device; thereafter, the user unlocks the device, for example with a PIN or a biometric, and uses the derived credential in place of the smart card for authentication and encryption.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as:
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
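As a minimal sketch (the table, column, and database names are hypothetical, and a JDBC driver such as the SQLite one is assumed to be on the classpath), the following Java fragment runs one such fixed, predefined aggregate query; users can execute it, but they cannot alter the SQL, and the HAVING clause additionally suppresses groups so small that an average would effectively disclose an individual salary.
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AverageSalaryReport {
    // The only statement exposed to report users: averages salaries by department
    // and hides departments with fewer than five employees.
    private static final String PREDEFINED_QUERY =
        "SELECT department, AVG(salary) AS avg_salary "
      + "FROM employees GROUP BY department HAVING COUNT(*) >= 5";

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:hr.db");
             PreparedStatement stmt = conn.prepareStatement(PREDEFINED_QUERY);
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                System.out.printf("%s: %.2f%n",
                        rs.getString("department"), rs.getDouble("avg_salary"));
            }
        }
    }
}
```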
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as single sign-on across organizational boundaries, reduced account administration overhead for each partner, and a more consistent user experience.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal (typically the user requesting access), the identity provider (IdP), which authenticates the principal and issues assertions, and the service provider (SP), which consumes the assertions and grants or denies access to its resources or services.
SAML works as follows: the user requests a resource or service from the SP; the SP redirects the user to the IdP with an authentication request; the IdP authenticates the user and returns a digitally signed assertion containing the user’s identity, attributes, and entitlements; and the SP validates the assertion and grants or denies access accordingly.
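The Python sketch below illustrates the kind of checks an SP performs on a received assertion (issuer trust, audience restriction, expiry). The XML, issuer names, and URLs are hypothetical and heavily simplified; a real deployment would use a SAML library and must verify the assertion’s digital signature, which this sketch omits.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Namespace used by SAML 2.0 assertions.
NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

# A simplified, hypothetical assertion for illustration only; real assertions
# are issued and signed by the IdP, and the signature MUST be verified.
ASSERTION = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Issuer>https://idp.supplier-one.example</saml:Issuer>
  <saml:Subject><saml:NameID>jdoe@supplier-one.example</saml:NameID></saml:Subject>
  <saml:Conditions NotOnOrAfter="2030-01-01T00:00:00Z">
    <saml:AudienceRestriction>
      <saml:Audience>https://portal.manufacturer.example</saml:Audience>
    </saml:AudienceRestriction>
  </saml:Conditions>
</saml:Assertion>
"""

TRUSTED_IDPS = {"https://idp.supplier-one.example"}  # the 20 supplier IdPs
MY_AUDIENCE = "https://portal.manufacturer.example"   # this SP's identifier

def accept_assertion(xml_text: str) -> bool:
    root = ET.fromstring(xml_text)
    issuer = root.findtext("saml:Issuer", namespaces=NS)
    audience = root.findtext(".//saml:Audience", namespaces=NS)
    not_after = root.find("saml:Conditions", NS).get("NotOnOrAfter")
    expires = datetime.fromisoformat(not_after.replace("Z", "+00:00"))
    return (issuer in TRUSTED_IDPS
            and audience == MY_AUDIENCE
            and datetime.now(timezone.utc) < expires)

print(accept_assertion(ASSERTION))  # True for this sample data
```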
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as reducing the attack surface, limiting the damage that a compromised or misused account can cause, and supporting need-to-know handling of sensitive information.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
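A minimal sketch of the idea in Python, with hypothetical users, documents, and projects: clearance alone is not sufficient, and access also requires membership in the project that owns the document.

```python
# Hypothetical access model: clearance is necessary but not sufficient;
# a user must also be assigned to the project that owns the document.
DOCUMENTS = {
    "salary-plan-2025.xlsx": {"classification": "secret", "project": "compensation"},
    "merger-memo.docx":      {"classification": "secret", "project": "m-and-a"},
}

USERS = {
    "alice": {"clearance": "secret", "projects": {"compensation"}},
    "bob":   {"clearance": "secret", "projects": {"m-and-a"}},
}

CLEARANCE_ORDER = ["public", "confidential", "secret"]

def can_read(user: str, document: str) -> bool:
    u, d = USERS[user], DOCUMENTS[document]
    cleared = CLEARANCE_ORDER.index(u["clearance"]) >= CLEARANCE_ORDER.index(d["classification"])
    need_to_know = d["project"] in u["projects"]
    return cleared and need_to_know   # least privilege: both conditions required

print(can_read("alice", "salary-plan-2025.xlsx"))  # True
print(can_read("bob", "salary-plan-2025.xlsx"))    # False, despite equal clearance
```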
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
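For illustration, the following Python sketch identifies LCP packets inside PPP frames by their protocol field value (0xC021) and decodes the LCP code field; the sample bytes are hypothetical, and HDLC framing and the frame check sequence are assumed to have been stripped already.

```python
import struct

LCP_PROTOCOL = 0xC021  # PPP protocol field value for the Link Control Protocol

LCP_CODES = {
    1: "Configure-Request", 2: "Configure-Ack", 3: "Configure-Nak",
    4: "Configure-Reject", 5: "Terminate-Request", 6: "Terminate-Ack",
    7: "Code-Reject", 8: "Protocol-Reject", 9: "Echo-Request",
    10: "Echo-Reply", 11: "Discard-Request",
}

def parse_ppp_information(protocol: int, payload: bytes) -> str:
    """Identify LCP packets carried inside a PPP frame."""
    if protocol != LCP_PROTOCOL:
        return f"not LCP (protocol 0x{protocol:04X})"
    code, identifier, length = struct.unpack("!BBH", payload[:4])
    return f"LCP {LCP_CODES.get(code, 'Unknown')} id={identifier} len={length}"

# A hypothetical LCP Configure-Request with no options (length = 4).
print(parse_ppp_information(0xC021, bytes([1, 0x23, 0x00, 0x04])))
print(parse_ppp_information(0x0021, b"\x45\x00"))  # 0x0021 carries an IPv4 payload
```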
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as launching Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks while concealing the true source of the traffic, hijacking or intercepting TCP sessions, and bypassing IP-based authentication or filtering by impersonating a trusted host.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
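One common mitigation is ingress filtering at the network edge, which drops packets that arrive on the external interface while claiming an internal source address. A minimal Python sketch of that check, with hypothetical internal address ranges:

```python
import ipaddress

# Hypothetical ingress filter: traffic arriving on the external interface must
# not claim a source address from inside our own networks, which blunts many
# IP spoofing attempts against internal trust relationships.
INTERNAL_NETWORKS = [ipaddress.ip_network("10.0.0.0/8"),
                     ipaddress.ip_network("192.168.0.0/16")]

def drop_spoofed(source_ip: str, arrived_on_external_interface: bool) -> bool:
    src = ipaddress.ip_address(source_ip)
    claims_internal = any(src in net for net in INTERNAL_NETWORKS)
    return arrived_on_external_interface and claims_internal

print(drop_spoofed("10.1.2.3", True))     # True: external packet spoofing an internal source
print(drop_spoofed("203.0.113.7", True))  # False: plausible external source
```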
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: the initiating node sends a SYN segment containing its initial sequence number; the responding node replies with a SYN-ACK segment that acknowledges the SYN and carries its own initial sequence number; and the initiating node completes the handshake with an ACK segment, after which data transfer can begin.
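The following Python sketch only simulates the exchange (real handshakes are performed by the operating system’s TCP stack), but it shows how the sequence and acknowledgement numbers relate across the three segments:

```python
import random

# A minimal simulation of the TCP three-way handshake state exchange.
def three_way_handshake():
    client_isn = random.randrange(2**32)          # client's initial sequence number
    print(f"client -> server: SYN, seq={client_isn}")

    server_isn = random.randrange(2**32)          # server's initial sequence number
    print(f"server -> client: SYN-ACK, seq={server_isn}, ack={(client_isn + 1) % 2**32}")

    print(f"client -> server: ACK, ack={(server_isn + 1) % 2**32}")
    print("connection established; data transfer can begin")

three_way_handshake()
```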
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
The use of a small-range Initialization Vector (IV) is the factor that contributes to the weakness of the Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities: the 24-bit IV space is so small that IVs are reused after a relatively modest number of frames, which leads to keystream reuse and enables statistical key-recovery attacks; the way the IV is concatenated with the static secret key exposes weaknesses in the RC4 key scheduling algorithm; and the CRC-32 checksum is linear, so it does not provide cryptographic integrity protection.
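A rough, back-of-the-envelope illustration of why a 24-bit IV is too small: the birthday-bound estimate below approximates the probability that at least one IV value repeats after a given number of frames, assuming IVs are chosen at random (many WEP implementations behaved even worse, e.g., restarting the IV counter at zero).

```python
import math

IV_SPACE = 2 ** 24  # WEP uses a 24-bit Initialization Vector

def prob_iv_repeat(frames: int, space: int = IV_SPACE) -> float:
    """Birthday-bound approximation of the probability that at least one IV
    value repeats after sending `frames` frames with randomly chosen IVs."""
    return 1 - math.exp(-frames * (frames - 1) / (2 * space))

for frames in (1_000, 5_000, 12_000, 40_000):
    print(f"{frames:>6} frames -> P(IV reuse) ~ {prob_iv_repeat(frames):.2f}")
# Roughly a 50% chance of a repeated IV after only ~5,000 frames, and near
# certainty within a few tens of thousands, i.e. minutes on a busy network.
```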
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
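A toy first-match packet filter in Python, with hypothetical rules and addresses, illustrates how decisions are made from network and transport layer header fields alone, without looking at the payload:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

# Rules are evaluated in order and the first match wins; the default is deny.
@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src_net: str
    dst_net: str
    protocol: str            # "tcp", "udp", or "any"
    dst_port: Optional[int]  # None matches any destination port

RULES = [
    Rule("allow", "0.0.0.0/0",  "192.0.2.10/32", "tcp", 443),  # inbound HTTPS to web server
    Rule("allow", "10.0.0.0/8", "0.0.0.0/0",     "any", None), # internal hosts outbound
    Rule("deny",  "0.0.0.0/0",  "0.0.0.0/0",     "any", None), # default deny
]

def filter_packet(src: str, dst: str, protocol: str, dst_port: int) -> str:
    for r in RULES:
        if (ip_address(src) in ip_network(r.src_net)
                and ip_address(dst) in ip_network(r.dst_net)
                and r.protocol in ("any", protocol)
                and r.dst_port in (None, dst_port)):
            return r.action
    return "deny"

print(filter_packet("198.51.100.5", "192.0.2.10", "tcp", 443))  # allow
print(filter_packet("198.51.100.5", "192.0.2.10", "tcp", 22))   # deny
```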
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior, and to identify any outliers or indicators of compromise. NBA tools can provide several benefits, such as detecting attacks that have no known signature, exposing slow or stealthy activity that individual alerts would miss, and providing early warning that a host has been compromised based on changes in its behavior.
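As a simplified illustration of baselining, the Python sketch below flags observations that deviate strongly (by z-score) from a learned baseline of per-minute connection counts; the numbers and the threshold are hypothetical, and real NBA tools use far richer behavioral models.

```python
import statistics

# Hypothetical per-minute outbound connection counts for one host. The first
# window establishes the behavioral baseline; later observations are flagged
# when they deviate strongly from it, regardless of whether any known attack
# signature is involved.
baseline = [12, 9, 14, 11, 10, 13, 12, 11, 9, 14, 10, 12]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    z_score = (observation - mean) / stdev
    return abs(z_score) > threshold

for obs in (13, 15, 480):           # 480 might indicate scanning or exfiltration
    print(obs, "anomalous" if is_anomalous(obs) else "normal")
```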
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on the MAC addresses. Logical network segmentation can provide several benefits, such as smaller broadcast domains, containment of a compromise to a single segment, and finer-grained control over which hosts and services can communicate with each other.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
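A small Python sketch, with hypothetical segment names and subnets, shows the practical effect: a sniffer on one segment can only observe traffic within its own segment.

```python
import ipaddress

# Hypothetical segmentation plan: each VLAN maps to its own subnet, so hosts in
# other segments are not visible from a single compromised machine.
SEGMENTS = {
    "user-workstations": ipaddress.ip_network("10.10.10.0/24"),
    "finance-servers":   ipaddress.ip_network("10.10.20.0/24"),
    "management":        ipaddress.ip_network("10.10.30.0/24"),
}

def segment_of(host: str) -> str:
    addr = ipaddress.ip_address(host)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return "unknown"

sniffer_host = "10.10.10.57"          # the compromised workstation
for target in ("10.10.10.23", "10.10.20.5"):
    same = segment_of(target) == segment_of(sniffer_host)
    print(target, "visible to sniffer" if same else "isolated by segmentation")
```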
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from the users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as SQL injection, cross-site scripting (XSS), command injection, buffer overflows, and information leakage through verbose error messages.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as inspecting and filtering requests based on their content, blocking known attack patterns or signatures, and enforcing input validation and protocol compliance in front of the protected application.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploit the vulnerability. For example, a new rule can be added to the application layer firewall to reject requests whose parameters contain characters or patterns associated with injection attacks, or to restrict the length and type of input accepted by the vulnerable fields, as sketched below.
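A minimal sketch of such a rule in Python, with hypothetical patterns and limits; real web application firewall rule sets are far more extensive and must be tuned to avoid blocking legitimate input.

```python
import re

# Illustrative patterns an application layer firewall rule might block while a
# proper code fix is being developed.
BLOCKED_PATTERNS = [
    re.compile(r"('|--|;)\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # classic SQL injection probe
    re.compile(r"<\s*script", re.IGNORECASE),                           # reflected XSS attempt
]
MAX_FIELD_LENGTH = 256

def allow_request(field_value: str) -> bool:
    if len(field_value) > MAX_FIELD_LENGTH:
        return False
    return not any(p.search(field_value) for p in BLOCKED_PATTERNS)

print(allow_request("Jane O'Neill"))               # True: ordinary input
print(allow_request("x' OR 1=1 --"))               # False: injection pattern
print(allow_request("<script>alert(1)</script>"))  # False: script tag
```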
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
Which of the following is the PRIMARY security concern associated with the implementation of smart cards?
The cards have limited memory
Vendor application compatibility
The cards can be misplaced
Mobile code can be embedded in the card
The primary security concern associated with the implementation of smart cards is that the cards can be misplaced, lost, stolen, or damaged, resulting in the compromise of the user’s identity, credentials, or data stored on the card. The other options are not the primary security concern, but rather secondary or minor issues. The cards have limited memory, which may affect the performance or functionality of the card, but not the security. Vendor application compatibility may affect the interoperability or usability of the card, but not the security. Mobile code can be embedded in the card, which may introduce malicious or unauthorized functionality, but this is a rare and sophisticated attack that requires physical access to the card and specialized equipment. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, p. 275; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, p. 348.
Which of the following BEST describes a rogue Access Point (AP)?
An AP that is not protected by a firewall
An AP not configured to use Wired Equivalent Privacy (WEP) with Triple Data Encryption Algorithm (3DES)
An AP connected to the wired infrastructure but not under the management of authorized network administrators
An AP infected by any kind of Trojan or Malware
A rogue Access Point (AP) is an AP connected to the wired infrastructure but not under the management of authorized network administrators. A rogue AP can pose a serious security threat, as it can allow unauthorized access to the network, bypass security controls, and expose sensitive data. The other options are not correct descriptions of a rogue AP. Option A is a description of an unsecured AP, which is an AP that is not protected by a firewall or other security measures. Option B is a description of an outdated AP, which is an AP not configured to use Wired Equivalent Privacy (WEP) with Triple Data Encryption Algorithm (3DES), which are weak encryption methods that can be easily cracked. Option D is a description of a compromised AP, which is an AP infected by any kind of Trojan or Malware, which can cause malicious behavior or damage to the network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, p. 325; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, p. 241.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as a consistent and repeatable reference for building and hardening systems, a clear standard against which current configurations can be compared, and easier detection of unauthorized or accidental configuration drift.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
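A minimal Python sketch, with hypothetical settings, of how an auditor (or an automated tool) might compare a system’s current configuration against its OS baseline and report drift:

```python
# A hypothetical OS baseline expressed as expected setting values; the audit
# compares what is actually configured against the baseline and reports drift.
BASELINE = {
    "password_min_length": 14,
    "ssh_permit_root_login": "no",
    "firewall_enabled": True,
    "audit_logging": "enabled",
}

current_config = {
    "password_min_length": 8,        # weaker than the baseline
    "ssh_permit_root_login": "no",
    "firewall_enabled": True,
    "audit_logging": "disabled",     # drifted from the baseline
}

def report_drift(baseline, current):
    findings = []
    for setting, expected in baseline.items():
        actual = current.get(setting, "<missing>")
        if actual != expected:
            findings.append(f"{setting}: expected {expected!r}, found {actual!r}")
    return findings

for finding in report_drift(BASELINE, current_config):
    print("DEVIATION:", finding)
```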
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
The primary benefit of using a formalized security testing report format and structure is that technical and management teams will better understand the testing objectives, the results of each test phase, and the potential impact levels. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as identifying vulnerabilities before attackers can exploit them, validating that security controls work as intended, and supporting compliance with internal and external requirements.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report typically contains components such as an executive summary, an introduction describing the scope and objectives, a methodology section, detailed results for each test phase, an assessment of potential impact, recommendations for remediation, and a conclusion.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OSs and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as better utilization of the physical hardware, isolation between workloads, and the ability to run different operating systems side by side.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
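A minimal Python sketch, with hypothetical log entries and field layout, of the kind of filtering an administrator would perform on a guest OS audit log to review one user’s access to data files:

```python
import csv
import io

# Hypothetical guest OS audit log entries (timestamp, user, action, object).
# The administrator filters the log for one user's access to data files.
AUDIT_LOG = io.StringIO("""\
2024-05-01T09:12:03,jsmith,READ,/data/payroll.xlsx
2024-05-01T09:14:47,adoe,WRITE,/data/projects/plan.docx
2024-05-01T10:02:11,jsmith,WRITE,/data/payroll.xlsx
2024-05-01T10:30:55,jsmith,READ,/data/hr/contracts.pdf
""")

def accesses_by_user(log_file, user: str):
    reader = csv.reader(log_file)
    return [(ts, action, obj) for ts, who, action, obj in reader if who == user]

for timestamp, action, path in accesses_by_user(AUDIT_LOG, "jsmith"):
    print(timestamp, action, path)
```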
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Security process data can provide several benefits, such as evidence of how well the security controls are operating, trends showing whether the security posture is improving or degrading, and input for risk-based decisions and compliance reporting.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
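A rough, illustrative calculation (all figures hypothetical) of how quickly a flood of failed remote access attempts could fill the authentication system’s log partition and cause a denial of service:

```python
# Back-of-the-envelope estimate of how quickly a flood of failed remote access
# attempts could exhaust the authentication server's log partition, producing a
# denial of service without ever guessing a password.
log_entry_bytes = 500            # assumed average size of one remote access audit record
attempts_per_second = 2_000      # assumed attack rate sustained against the system
free_disk_bytes = 20 * 1024**3   # 20 GiB left on the log partition

bytes_per_hour = log_entry_bytes * attempts_per_second * 3600
hours_to_fill = free_disk_bytes / bytes_per_hour

print(f"log growth: {bytes_per_hour / 1024**3:.1f} GiB/hour")
print(f"partition full in about {hours_to_fill:.1f} hours")
```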
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.