The following compliance controls are associated with this Policy definition, 'Azure Defender for Key Vault should be enabled' (0e6763cc-5078-4e64-889d-ff4d9a839047).
Control Domain |
Control |
Name |
MetadataId |
Category |
Title |
Owner |
Requirements |
Description |
Info |
Policy# |
Azure_Security_Benchmark_v2.0 |
IR-3 |
Azure_Security_Benchmark_v2.0_IR-3 |
Azure Security Benchmark IR-3 |
Incident Response |
Detection and analysis - create incidents based on high quality alerts |
Customer |
Ensure you have a process to create high quality alerts and measure the quality of alerts. This allows you to learn lessons from past incidents and prioritize alerts for analysts, so they don’t waste time on false positives.
High quality alerts can be built based on experience from past incidents, validated community sources, and tools designed to generate and clean up alerts by fusing and correlating diverse signal sources.
Azure Security Center provides high quality alerts across many Azure assets. You can use the ASC data connector to stream the alerts to Azure Sentinel. Azure Sentinel lets you create advanced alert rules to generate incidents automatically for an investigation.
Export your Azure Security Center alerts and recommendations using the export feature to help identify risks to Azure resources. Export alerts and recommendations either manually or in an ongoing, continuous fashion.
How to configure export: https://docs.microsoft.com/azure/security-center/continuous-export
How to stream alerts into Azure Sentinel: https://docs.microsoft.com/azure/sentinel/connect-azure-security-center |
n/a |
link |
8 |
Azure_Security_Benchmark_v2.0 |
IR-5 |
Azure_Security_Benchmark_v2.0_IR-5 |
Azure Security Benchmark IR-5 |
Incident Response |
Detection and analysis - prioritize incidents |
Customer |
Provide context to analysts on which incidents to focus on first based on alert severity and asset sensitivity.
Azure Security Center assigns a severity to each alert to help you prioritize which alerts should be investigated first. The severity is based on how confident Security Center is in the finding or the analytic used to issue the alert, as well as the confidence level that there was malicious intent behind the activity that led to the alert.
Additionally, mark resources using tags and create a naming system to identify and categorize Azure resources, especially those processing sensitive data. It is your responsibility to prioritize the remediation of alerts based on the criticality of the Azure resources and environment where the incident occurred.
Security alerts in Azure Security Center: https://docs.microsoft.com/azure/security-center/security-center-alerts-overview
Use tags to organize your Azure resources: https://docs.microsoft.com/azure/azure-resource-manager/resource-group-using-tags |
n/a |
link |
8 |
Azure_Security_Benchmark_v2.0 |
LT-1 |
Azure_Security_Benchmark_v2.0_LT-1 |
Azure Security Benchmark LT-1 |
Logging and Threat Detection |
Enable threat detection for Azure resources |
Customer |
Ensure you are monitoring different types of Azure assets for potential threats and anomalies. Focus on getting high quality alerts to reduce false positives for analysts to sort through. Alerts can be sourced from log data, agents, or other data.
Use the Azure Security Center built-in threat detection capability, which is based on monitoring Azure service telemetry and analyzing service logs. Data is collected using the Log Analytics agent, which reads various security-related configurations and event logs from the system and copies the data to your workspace for analysis.
In addition, use Azure Sentinel to build analytics rules, which hunt threats that match specific criteria across your environment. The rules generate incidents when the criteria are matched, so that you can investigate each incident. Azure Sentinel can also import third party threat intelligence to enhance its threat detection capability.
Threat protection in Azure Security Center: https://docs.microsoft.com/azure/security-center/threat-protection
Azure Security Center security alerts reference guide: https://docs.microsoft.com/azure/security-center/alerts-reference
Create custom analytics rules to detect threats: https://docs.microsoft.com/azure/sentinel/tutorial-detect-threats-custom
Cyber threat intelligence with Azure Sentinel: https://docs.microsoft.com/azure/architecture/example-scenario/data/sentinel-threat-intelligence |
n/a |
link |
8 |
Azure_Security_Benchmark_v2.0 |
LT-2 |
Azure_Security_Benchmark_v2.0_LT-2 |
Azure Security Benchmark LT-2 |
Logging and Threat Detection |
Enable threat detection for Azure identity and access management |
Customer |
Microsoft Entra ID provides the following user logs that can be viewed in Microsoft Entra ID reporting or integrated with Azure Monitor, Azure Sentinel or other SIEM/monitoring tools for more sophisticated monitoring and analytics use cases:
- Sign-ins - The sign-ins report provides information about the usage of managed applications and user sign-in activities.
- Audit logs - Provides traceability through logs for all changes done by various features within Microsoft Entra ID. Examples of audit logs include changes made to any resources within Microsoft Entra ID like adding or removing users, apps, groups, roles and policies.
- Risky sign-ins - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
- Users flagged for risk - A risky user is an indicator for a user account that might have been compromised.
Azure Security Center can also alert on certain suspicious activities such as an excessive number of failed authentication attempts, and deprecated accounts in the subscription. In addition to the basic security hygiene monitoring, Azure Security Center’s Threat Protection module can also collect more in-depth security alerts from individual Azure compute resources (such as virtual machines, containers, app service), data resources (such as SQL DB and storage), and Azure service layers. This capability allows you to see account anomalies inside the individual resources.
Audit activity reports in Microsoft Entra ID: https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs
Enable Azure Identity Protection: https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection
Threat protection in Azure Security Center: https://docs.microsoft.com/azure/security-center/threat-protection |
n/a |
link |
8 |
Azure_Security_Benchmark_v3.0 |
DP-8 |
Azure_Security_Benchmark_v3.0_DP-8 |
Microsoft cloud security benchmark DP-8 |
Data Protection |
Ensure security of key and certificate repository |
Shared |
**Security Principle:**
Ensure the security of the key vault service used for cryptographic key and certificate lifecycle management. Harden your key vault service through access control, network security, logging and monitoring, and backup to ensure keys and certificates are always protected with maximum security.
**Azure Guidance:**
Secure your cryptographic keys and certificates by hardening your Azure Key Vault service through the following controls:
- Restrict access to keys and certificates in Azure Key Vault using built-in access policies or Azure RBAC to ensure the least-privilege principle is in place for management plane and data plane access.
- Secure the Azure Key Vault using Private Link and Azure Firewall to ensure minimal exposure of the service.
- Ensure separation of duties is in place so that users who manage encryption keys do not have the ability to access encrypted data, and vice versa.
- Use managed identity to access keys stored in the Azure Key Vault in your workload applications.
- Never have the keys stored in plaintext format outside of the Azure Key Vault.
- When purging data, ensure your keys are not deleted before the actual data, backups and archives are purged.
- Back up your keys and certificates using the Azure Key Vault. Enable soft delete and purge protection to avoid accidental deletion of keys.
- Turn on Azure Key Vault logging to ensure the critical management plane and data plane activities are logged.
**Implementation and additional context:**
Azure Key Vault overview:
https://docs.microsoft.com/azure/key-vault/general/overview
Azure Key Vault security best practices:
https://docs.microsoft.com/azure/key-vault/general/best-practices
Use managed identity to access Azure Key Vault:
https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad
|
n/a |
link |
6 |
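
The DP-8 guidance above recommends using managed identities, rather than credentials stored outside Key Vault, when workload applications read keys or secrets. A minimal Python sketch of that pattern follows, assuming the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders.

```python
# Minimal sketch, assuming the azure-identity and azure-keyvault-secrets
# packages and a managed identity that has already been granted a
# least-privilege data-plane role (e.g. "Key Vault Secrets User").
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up the workload's managed identity when
# running on Azure, so no credentials are stored outside Key Vault.
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://<your-key-vault-name>.vault.azure.net",  # placeholder
    credential=credential,
)

# Retrieve a secret by name; the secret name is illustrative only.
secret = client.get_secret("example-secret-name")
print(secret.name)
```
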
Azure_Security_Benchmark_v3.0 |
IR-3 |
Azure_Security_Benchmark_v3.0_IR-3 |
Microsoft cloud security benchmark IR-3 |
Incident Response |
Detection and analysis - create incidents based on high-quality alerts |
Shared |
**Security Principle:**
Ensure you have a process to create high-quality alerts and measure the quality of alerts. This allows you to learn lessons from past incidents and prioritize alerts for analysts, so they don't waste time on false positives.
High-quality alerts can be built based on experience from past incidents, validated community sources, and tools designed to generate and clean up alerts by fusing and correlating diverse signal sources.
**Azure Guidance:**
Microsoft Defender for Cloud provides high-quality alerts across many Azure assets. You can use the Microsoft Defender for Cloud data connector to stream the alerts to Azure Sentinel. Azure Sentinel lets you create advanced alert rules to generate incidents automatically for an investigation.
Export your Microsoft Defender for Cloud alerts and recommendations using the export feature to help identify risks to Azure resources. Export alerts and recommendations either manually or in an ongoing, continuous fashion.
**Implementation and additional context:**
How to configure export:
https://docs.microsoft.com/azure/security-center/continuous-export
How to stream alerts into Azure Sentinel:
https://docs.microsoft.com/azure/sentinel/connect-azure-security-center |
n/a |
link |
18 |
Azure_Security_Benchmark_v3.0 |
IR-5 |
Azure_Security_Benchmark_v3.0_IR-5 |
Microsoft cloud security benchmark IR-5 |
Incident Response |
Detection and analysis - prioritize incidents |
Shared |
**Security Principle:**
Provide context to security operations teams to help them determine which incidents to focus on first, based on alert severity and asset sensitivity defined in your organization’s incident response plan.
**Azure Guidance:**
Microsoft Defender for Cloud assigns a severity to each alert to help you prioritize which alerts should be investigated first. The severity is based on how confident Microsoft Defender for Cloud is in the finding or the analytics used to issue the alert, as well as the confidence level that there was malicious intent behind the activity that led to the alert.
Additionally, mark resources using tags and create a naming system to identify and categorize Azure resources, especially those processing sensitive data. It is your responsibility to prioritize the remediation of alerts based on the criticality of the Azure resources and environment where the incident occurred.
**Implementation and additional context:**
Security alerts in Microsoft Defender for Cloud:
https://docs.microsoft.com/azure/security-center/security-center-alerts-overview
Use tags to organize your Azure resources:
https://docs.microsoft.com/azure/azure-resource-manager/resource-group-using-tags |
n/a |
link |
18 |
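
IR-5 above recommends tagging resources so alerts can be prioritized by asset criticality. As a hedged illustration, the sketch below applies tags programmatically through the ARM Tags API using azure-identity and requests; the resource scope, tag names, and api-version are placeholder assumptions and should be verified against the current "Tags - Create Or Update At Scope" reference.

```python
# Minimal sketch, assuming azure-identity and requests, a caller with rights
# to write tags, and that the api-version below is still current (verify
# against the ARM "Tags - Create Or Update At Scope" reference).
import requests
from azure.identity import DefaultAzureCredential

# Placeholder scope: the resource whose criticality you want to record.
RESOURCE_SCOPE = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.KeyVault/vaults/<vault-name>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
url = (
    f"https://management.azure.com{RESOURCE_SCOPE}"
    "/providers/Microsoft.Resources/tags/default?api-version=2021-04-01"
)

# Note: PUT replaces all tags at the scope; use PATCH with a "Merge"
# operation to add tags without overwriting existing ones.
body = {"properties": {"tags": {"data-sensitivity": "high", "env": "prod"}}}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token.token}"})
resp.raise_for_status()
print(resp.json()["properties"]["tags"])
```
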
Azure_Security_Benchmark_v3.0 |
LT-1 |
Azure_Security_Benchmark_v3.0_LT-1 |
Microsoft cloud security benchmark LT-1 |
Logging and Threat Detection |
Enable threat detection capabilities |
Shared |
**Security Principle:**
To support threat detection scenarios, monitor all known resource types for known and expected threats and anomalies. Configure your alert filtering and analytics rules to extract high-quality alerts from log data, agents, or other data sources to reduce false positives.
**Azure Guidance:**
Use the threat detection capability of Azure Defender services in Microsoft Defender for Cloud for the respective Azure services.
For threat detection not included in Azure Defender services, refer to the Azure Security Benchmark service baselines for the respective services to enable the threat detection or security alert capabilities within the service. Extract the alerts to your Azure Monitor or Azure Sentinel to build analytics rules, which hunt threats that match specific criteria across your environment.
For Operational Technology (OT) environments that include computers that control or monitor Industrial Control System (ICS) or Supervisory Control and Data Acquisition (SCADA) resources, use Defender for IoT to inventory assets and detect threats and vulnerabilities.
For services that do not have a native threat detection capability, consider collecting the data plane logs and analyzing threats through Azure Sentinel.
**Implementation and additional context:**
Introduction to Azure Defender:
https://docs.microsoft.com/azure/security-center/azure-defender
Microsoft Defender for Cloud security alerts reference guide:
https://docs.microsoft.com/azure/security-center/alerts-reference
Create custom analytics rules to detect threats:
https://docs.microsoft.com/azure/sentinel/tutorial-detect-threats-custom
Cyber threat intelligence with Azure Sentinel:
https://docs.microsoft.com/azure/architecture/example-scenario/data/sentinel-threat-intelligence |
n/a |
link |
21 |
Azure_Security_Benchmark_v3.0 |
LT-2 |
Azure_Security_Benchmark_v3.0_LT-2 |
Microsoft cloud security benchmark LT-2 |
Logging and Threat Detection |
Enable threat detection for identity and access management |
Shared |
**Security Principle:**
Detect threats to identities and access management by monitoring user and application sign-in and access anomalies. Alert on behavioral patterns such as an excessive number of failed login attempts and deprecated accounts in the subscription.
**Azure Guidance:**
Microsoft Entra ID provides the following logs that can be viewed in Microsoft Entra reporting or integrated with Azure Monitor, Azure Sentinel or other SIEM/monitoring tools for more sophisticated monitoring and analytics use cases:
- Sign-ins: The sign-ins report provides information about the usage of managed applications and user sign-in activities.
- Audit logs: Provides traceability through logs for all changes done by various features within Microsoft Entra ID. Examples of audit logs include changes made to any resources within Microsoft Entra ID like adding or removing users, apps, groups, roles and policies.
- Risky sign-ins: A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
- Users flagged for risk: A risky user is an indicator for a user account that might have been compromised.
Microsoft Entra ID also provides an Identity Protection module to detect and remediate risks related to user accounts and sign-in behaviors. Example risks include leaked credentials, sign-ins from anonymous or malware-linked IP addresses, and password spray. The policies in Microsoft Entra Identity Protection allow you to enforce risk-based MFA in conjunction with Azure Conditional Access on user accounts.
In addition, Microsoft Defender for Cloud can be configured to alert on deprecated accounts in the subscription and suspicious activities such as an excessive number of failed authentication attempts. In addition to the basic security hygiene monitoring, Microsoft Defender for Cloud's Threat Protection module can also collect more in-depth security alerts from individual Azure compute resources (such as virtual machines, containers, app service), data resources (such as SQL DB and storage), and Azure service layers. This capability allows you to see account anomalies inside the individual resources.
Note: If you are connecting your on-premises Active Directory for synchronization, use the Microsoft Defender for Identity solution to consume your on-premises Active Directory signals to identify, detect, and investigate advanced threats, compromised identities, and malicious insider actions directed at your organization.
**Implementation and additional context:**
Audit activity reports in Microsoft Entra ID:
https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs
Enable Azure Identity Protection:
https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection
Threat protection in Microsoft Defender for Cloud:
https://docs.microsoft.com/azure/security-center/threat-protection |
n/a |
link |
20 |
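
LT-2 above recommends monitoring Microsoft Entra ID sign-in logs for anomalies such as repeated failed sign-ins. The following Python sketch queries a Log Analytics workspace with the azure-monitor-query package, assuming sign-in logs are already routed there via diagnostic settings; the workspace ID and the failure threshold are assumptions.

```python
# Minimal sketch, assuming azure-identity and azure-monitor-query, and that
# Microsoft Entra ID sign-in logs already flow to the Log Analytics workspace
# via diagnostic settings. Workspace ID and the threshold are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count failed sign-ins per account over the last day
# (in SigninLogs, ResultType "0" means success).
query = """
SigninLogs
| where ResultType != "0"
| summarize FailedAttempts = count() by UserPrincipalName
| where FailedAttempts > 10
| order by FailedAttempts desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```
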
|
C.04.3 - Timelines |
C.04.3 - Timelines |
404 not found |
|
|
|
n/a |
n/a |
|
20 |
|
C.04.6 - Timelines |
C.04.6 - Timelines |
404 not found |
|
|
|
n/a |
n/a |
|
20 |
|
C.04.7 - Evaluated |
C.04.7 - Evaluated |
404 not found |
|
|
|
n/a |
n/a |
|
39 |
Canada_Federal_PBMM_3-1-2020 |
CA_3 |
Canada_Federal_PBMM_3-1-2020_CA_3 |
Canada Federal PBMM 3-1-2020 CA 3 |
Information System Connections |
System Interconnections |
Shared |
1. The organization authorizes connections from the information system to other information systems through the use of Interconnection Security Agreements.
2. The organization documents, for each interconnection, the interface characteristics, security requirements, and the nature of the information communicated.
3. The organization reviews and updates Interconnection Security Agreements annually. |
To establish and maintain secure connections between information systems. |
|
77 |
Canada_Federal_PBMM_3-1-2020 |
CA_3(3) |
Canada_Federal_PBMM_3-1-2020_CA_3(3) |
Canada Federal PBMM 3-1-2020 CA 3(3) |
Information System Connections |
System Interconnections | Classified Non-National Security System Connections |
Shared |
The organization prohibits the direct connection of any internal network or system to an external network without the use of security controls approved by the information owner. |
To ensure the integrity and security of internal systems against external threats. |
|
77 |
Canada_Federal_PBMM_3-1-2020 |
CA_3(5) |
Canada_Federal_PBMM_3-1-2020_CA_3(5) |
Canada Federal PBMM 3-1-2020 CA 3(5) |
Information System Connections |
System Interconnections | Restrictions on External Network Connections |
Shared |
The organization employs allow-all, deny-by-exception; deny-all policy for allowing any systems to connect to external information systems. |
To enhance security posture against unauthorized access. |
|
77 |
Canada_Federal_PBMM_3-1-2020 |
CA_7 |
Canada_Federal_PBMM_3-1-2020_CA_7 |
Canada Federal PBMM 3-1-2020 CA 7 |
Continuous Monitoring |
Continuous Monitoring |
Shared |
1. The organization develops a continuous monitoring strategy and implements a continuous monitoring program that includes establishment of organization-defined metrics to be monitored.
2. The organization develops a continuous monitoring strategy and implements a continuous monitoring program that includes establishment of at least monthly monitoring and assessments of at least operating system scans, database, and web application scan.
3. The organization develops a continuous monitoring strategy and implements a continuous monitoring program that includes ongoing security control assessments in accordance with the organizational continuous monitoring strategy.
4. The organization develops a continuous monitoring strategy and implements a continuous monitoring program that includes ongoing security status monitoring of organization-defined metrics in accordance with the organizational continuous monitoring strategy.
5. The organization develops a continuous monitoring strategy and implements a continuous monitoring program that includes correlation and analysis of security-related information generated by assessments and monitoring.
6. The organization develops a continuous monitoring strategy and implements a continuous monitoring program that includes response actions to address results of the analysis of security-related information.
7. The organization develops a continuous monitoring strategy and implements a continuous monitoring program that includes reporting the security status of organization and the information system to organization-defined personnel or roles at organization-defined frequency. |
To ensure the ongoing effectiveness of security controls and maintain the security posture in alignment with organizational objectives and requirements. |
|
125 |
Canada_Federal_PBMM_3-1-2020 |
CM_8(3) |
Canada_Federal_PBMM_3-1-2020_CM_8(3) |
Canada Federal PBMM 3-1-2020 CM 8(3) |
Information System Component Inventory |
Information System Component Inventory | Automated Unauthorized Component Detection |
Shared |
1. The organization employs automated mechanisms continuously, using automated mechanisms with a maximum five-minute delay in detection to detect the presence of unauthorized hardware, software, and firmware components within the information system; and
2. The organization takes the organization-defined actions when unauthorized components are detected such as disables network access by such components; isolates the components; notifies organization-defined personnel or roles. |
To employ automated mechanisms for timely detection of unauthorized hardware, software, and firmware components in the information system. |
|
17 |
Canada_Federal_PBMM_3-1-2020 |
CM_8(5) |
Canada_Federal_PBMM_3-1-2020_CM_8(5) |
Canada Federal PBMM 3-1-2020 CM 8(5) |
Information System Component Inventory |
Information System Component Inventory | No Duplicate Accounting of Components |
Shared |
The organization verifies that all components within the authorization boundary of the information system are not duplicated in other information system component inventories. |
To ensure that all components within the authorization boundary of the information system are uniquely identified and not duplicated in other information system component inventories. |
|
17 |
Canada_Federal_PBMM_3-1-2020 |
SI_3 |
Canada_Federal_PBMM_3-1-2020_SI_3 |
Canada Federal PBMM 3-1-2020 SI 3 |
Malicious Code Protection |
Malicious Code Protection |
Shared |
1. The organization employs malicious code protection mechanisms at information system entry and exit points to detect and eradicate malicious code.
2. The organization updates malicious code protection mechanisms whenever new releases are available in accordance with organizational configuration management policy and procedures.
3. The organization configures malicious code protection mechanisms to:
a. Perform periodic scans of the information system at least weekly and real-time scans of files from external sources at endpoints and network entry/exit points as the files are downloaded, opened, or executed in accordance with organizational security policy; and
b. Block and quarantine malicious code; send alert to the key role as defined in the system and information integrity policy in response to malicious code detection.
4. The organization addresses the receipt of false positives during malicious code detection and eradication and the resulting potential impact on the availability of the information system. |
To mitigate potential impacts on system availability. |
|
52 |
Canada_Federal_PBMM_3-1-2020 |
SI_3(1) |
Canada_Federal_PBMM_3-1-2020_SI_3(1) |
Canada Federal PBMM 3-1-2020 SI 3(1) |
Malicious Code Protection |
Malicious Code Protection | Central Management |
Shared |
The organization centrally manages malicious code protection mechanisms. |
To centrally manage malicious code protection mechanisms. |
|
51 |
Canada_Federal_PBMM_3-1-2020 |
SI_3(2) |
Canada_Federal_PBMM_3-1-2020_SI_3(2) |
Canada Federal PBMM 3-1-2020 SI 3(2) |
Malicious Code Protection |
Malicious Code Protection | Automatic Updates |
Shared |
The information system automatically updates malicious code protection mechanisms. |
To ensure automatic updates in malicious code protection mechanisms. |
|
51 |
Canada_Federal_PBMM_3-1-2020 |
SI_3(7) |
Canada_Federal_PBMM_3-1-2020_SI_3(7) |
Canada Federal PBMM 3-1-2020 SI 3(7) |
Malicious Code Protection |
Malicious Code Protection | Non Signature-Based Detection |
Shared |
The information system implements non-signature-based malicious code detection mechanisms. |
To enhance overall security posture. |
|
51 |
Canada_Federal_PBMM_3-1-2020 |
SI_4 |
Canada_Federal_PBMM_3-1-2020_SI_4 |
Canada Federal PBMM 3-1-2020 SI 4 |
Information System Monitoring |
Information System Monitoring |
Shared |
1. The organization monitors the information system to detect:
a. Attacks and indicators of potential attacks in accordance with organization-defined monitoring objectives; and
b. Unauthorized local, network, and remote connections;
2. The organization identifies unauthorized use of the information system through organization-defined techniques and methods.
3. The organization deploys monitoring devices: (i) strategically within the information system to collect organization-determined essential information; and (ii) at ad hoc locations within the system to track specific types of transactions of interest to the organization.
4. The organization protects information obtained from intrusion-monitoring tools from unauthorized access, modification, and deletion.
5. The organization heightens the level of information system monitoring activity whenever there is an indication of increased risk to organizational operations and assets, individuals, other organizations, or Canada based on law enforcement information, intelligence information, or other credible sources of information.
6. The organization obtains legal opinion with regard to information system monitoring activities in accordance with organizational policies, directives and standards.
7. The organization provides organization-defined information system monitoring information to organization-defined personnel or roles at an organization-defined frequency. |
To enhance overall security posture. |
|
95 |
Canada_Federal_PBMM_3-1-2020 |
SI_4(1) |
Canada_Federal_PBMM_3-1-2020_SI_4(1) |
Canada Federal PBMM 3-1-2020 SI 4(1) |
Information System Monitoring |
Information System Monitoring | System-Wide Intrusion Detection System |
Shared |
The organization connects and configures individual intrusion detection tools into an information system-wide intrusion detection system. |
To enhance overall security posture. |
|
95 |
Canada_Federal_PBMM_3-1-2020 |
SI_4(2) |
Canada_Federal_PBMM_3-1-2020_SI_4(2) |
Canada Federal PBMM 3-1-2020 SI 4(2) |
Information System Monitoring |
Information System Monitoring | Automated Tools for Real-Time Analysis |
Shared |
The organization employs automated tools to support near real-time analysis of events. |
To enhance overall security posture. |
|
94 |
Canada_Federal_PBMM_3-1-2020 |
SI_8(1) |
Canada_Federal_PBMM_3-1-2020_SI_8(1) |
Canada Federal PBMM 3-1-2020 SI 8(1) |
Spam Protection |
Spam Protection | Central Management of Protection Mechanisms |
Shared |
The organization centrally manages spam protection mechanisms. |
To enhance overall security posture. |
|
88 |
CIS_Azure_1.1.0 |
2.1 |
CIS_Azure_1.1.0_2.1 |
CIS Microsoft Azure Foundations Benchmark recommendation 2.1 |
2 Security Center |
Ensure that standard pricing tier is selected |
Shared |
The customer is responsible for implementing this recommendation. |
The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in the Azure Security Center. |
link |
15 |
CIS_Azure_1.3.0 |
2.8 |
CIS_Azure_1.3.0_2.8 |
CIS Microsoft Azure Foundations Benchmark recommendation 2.8 |
2 Security Center |
Ensure that Azure Defender is set to On for Key Vault |
Shared |
The customer is responsible for implementing this recommendation. |
Turning on Azure Defender enables threat detection for Key Vault, providing threat intelligence, anomaly detection, and behavior analytics in the Azure Security Center. |
link |
9 |
CIS_Azure_1.4.0 |
2.8 |
CIS_Azure_1.4.0_2.8 |
CIS Microsoft Azure Foundations Benchmark recommendation 2.8 |
2 Microsoft Defender for Cloud |
Ensure that Microsoft Defender for Key Vault is set to 'On' |
Shared |
The customer is responsible for implementing this recommendation. |
Turning on Microsoft Defender for Key Vault enables threat detection for Key Vault, providing threat intelligence, anomaly detection, and behavior analytics in the Microsoft Defender for Cloud. |
link |
9 |
CIS_Azure_2.0.0 |
2.1.10 |
CIS_Azure_2.0.0_2.1.10 |
CIS Microsoft Azure Foundations Benchmark recommendation 2.1.10 |
2.1 |
Ensure That Microsoft Defender for Key Vault Is Set To 'On' |
Shared |
Turning on Microsoft Defender for Key Vault incurs an additional cost per resource. |
Turning on Microsoft Defender for Key Vault enables threat detection for Key Vault, providing threat intelligence, anomaly detection, and behavior analytics in the Microsoft Defender for Cloud.
Enabling Microsoft Defender for Key Vault allows for greater defense-in-depth, with threat detection provided by the Microsoft Security Response Center (MSRC). |
link |
9 |
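
The CIS recommendations above call for setting the Defender for Key Vault plan to 'On'. One way to do this outside the portal is the Microsoft.Security pricings REST API; the sketch below is an illustration only, using azure-identity and requests, and the plan name "KeyVaults" and api-version are assumptions to be verified against the current Pricings reference.

```python
# Minimal sketch, assuming azure-identity and requests, Security Admin/Owner
# rights on the subscription, and that the plan name "KeyVaults" and the
# api-version below match the current Microsoft.Security Pricings reference.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Security/pricings/KeyVaults?api-version=2023-01-01"
)

# "Standard" turns the Defender for Key Vault plan on; "Free" turns it off.
body = {"properties": {"pricingTier": "Standard"}}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token.token}"})
resp.raise_for_status()
print(resp.json()["properties"]["pricingTier"])
```
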
CMMC_2.0_L2 |
AU.L2-3.3.1 |
CMMC_2.0_L2_AU.L2-3.3.1 |
404 not found |
|
|
|
n/a |
n/a |
|
35 |
CMMC_2.0_L2 |
AU.L2-3.3.2 |
CMMC_2.0_L2_AU.L2-3.3.2 |
404 not found |
|
|
|
n/a |
n/a |
|
33 |
CMMC_2.0_L2 |
AU.L2-3.3.4 |
CMMC_2.0_L2_AU.L2-3.3.4 |
404 not found |
|
|
|
n/a |
n/a |
|
10 |
CMMC_2.0_L2 |
AU.L2-3.3.5 |
CMMC_2.0_L2_AU.L2-3.3.5 |
404 not found |
|
|
|
n/a |
n/a |
|
10 |
CMMC_2.0_L2 |
RA.L2-3.11.2 |
CMMC_2.0_L2_RA.L2-3.11.2 |
404 not found |
|
|
|
n/a |
n/a |
|
16 |
CMMC_2.0_L2 |
RA.L2-3.11.3 |
CMMC_2.0_L2_RA.L2-3.11.3 |
404 not found |
|
|
|
n/a |
n/a |
|
16 |
CMMC_2.0_L2 |
SI.L1-3.14.1 |
CMMC_2.0_L2_SI.L1-3.14.1 |
404 not found |
|
|
|
n/a |
n/a |
|
14 |
CMMC_2.0_L2 |
SI.L1-3.14.2 |
CMMC_2.0_L2_SI.L1-3.14.2 |
404 not found |
|
|
|
n/a |
n/a |
|
11 |
CMMC_2.0_L2 |
SI.L2-3.14.3 |
CMMC_2.0_L2_SI.L2-3.14.3 |
404 not found |
|
|
|
n/a |
n/a |
|
11 |
CMMC_2.0_L2 |
SI.L2-3.14.6 |
CMMC_2.0_L2_SI.L2-3.14.6 |
404 not found |
|
|
|
n/a |
n/a |
|
25 |
CMMC_2.0_L2 |
SI.L2-3.14.7 |
CMMC_2.0_L2_SI.L2-3.14.7 |
404 not found |
|
|
|
n/a |
n/a |
|
19 |
CMMC_L2_v1.9.0 |
AU.L2_3.3.1 |
CMMC_L2_v1.9.0_AU.L2_3.3.1 |
Cybersecurity Maturity Model Certification (CMMC) Level 2 v1.9.0 AU.L2 3.3.1 |
Audit and Accountability |
System Auditing |
Shared |
Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. |
To enhance security and accountability measures. |
|
41 |
CMMC_L2_v1.9.0 |
CA.L2_3.12.2 |
CMMC_L2_v1.9.0_CA.L2_3.12.2 |
Cybersecurity Maturity Model Certification (CMMC) Level 2 v1.9.0 CA.L2 3.12.2 |
Security Assessment |
Plan of Action |
Shared |
Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems. |
To enhance the resilience to cyber threats and protect systems and data from potential exploitation or compromise. |
|
17 |
CMMC_L2_v1.9.0 |
CM.L2_3.4.3 |
CMMC_L2_v1.9.0_CM.L2_3.4.3 |
Cybersecurity Maturity Model Certification (CMMC) Level 2 v1.9.0 CM.L2 3.4.3 |
Configuration Management |
System Change Management |
Shared |
Track, review, approve or disapprove, and log changes to organizational systems. |
To ensure accountability, transparency, and compliance with established procedures and security requirements. |
|
15 |
CMMC_L2_v1.9.0 |
SC.L1_3.13.1 |
CMMC_L2_v1.9.0_SC.L1_3.13.1 |
Cybersecurity Maturity Model Certification (CMMC) Level 2 v1.9.0 SC.L1 3.13.1 |
System and Communications Protection |
Boundary Protection |
Shared |
Monitor, control, and protect organizational communications (i.e., information transmitted or received by organizational information systems) at the external boundaries and key internal boundaries of the information systems. |
To protect information assets from external attacks and insider threats. |
|
43 |
CMMC_L2_v1.9.0 |
SC.L1_3.13.5 |
CMMC_L2_v1.9.0_SC.L1_3.13.5 |
Cybersecurity Maturity Model Certification (CMMC) Level 2 v1.9.0 SC.L1 3.13.5 |
System and Communications Protection |
Public Access System Separation |
Shared |
Implement subnetworks for publicly accessible system components that are physically or logically separated from internal networks. |
To control access, monitor traffic, and mitigate the risk of unauthorized access or exploitation of internal resources. |
|
43 |
CMMC_L2_v1.9.0 |
SI.L1_3.14.2 |
CMMC_L2_v1.9.0_SI.L1_3.14.2 |
Cybersecurity Maturity Model Certification (CMMC) Level 2 v1.9.0 SI.L1 3.14.2 |
System and Information Integrity |
Malicious Code Protection |
Shared |
Provide protection from malicious code at appropriate locations within organizational information systems. |
To protect the integrity, confidentiality, and availability of information assets. |
|
19 |
CMMC_L2_v1.9.0 |
SI.L1_3.14.4 |
CMMC_L2_v1.9.0_SI.L1_3.14.4 |
Cybersecurity Maturity Model Certification (CMMC) Level 2 v1.9.0 SI.L1 3.14.4 |
System and Information Integrity |
Update Malicious Code Protection |
Shared |
Update malicious code protection mechanisms when new releases are available. |
To effectively defend against new and evolving malware threats, minimize the risk of infections, and maintain the security of their information systems and data. |
|
19 |
CMMC_L2_v1.9.0 |
SI.L1_3.14.5 |
CMMC_L2_v1.9.0_SI.L1_3.14.5 |
Cybersecurity Maturity Model Certification (CMMC) Level 2 v1.9.0 SI.L1 3.14.5 |
System and Information Integrity |
System & File Scanning |
Shared |
Perform periodic scans of the information system and real time scans of files from external sources as files are downloaded, opened, or executed. |
To identify and mitigate security risks, prevent malware infections, and minimize the impact of security breaches. |
|
19 |
CMMC_L3 |
IR.2.093 |
CMMC_L3_IR.2.093 |
CMMC L3 IR.2.093 |
Incident Response |
Detect and report events. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
The monitoring, identification, and reporting of events are the foundation for incident identification and commence the incident life cycle. Events potentially affect the productivity of organizational assets and, in turn, associated services. These events must be captured and analyzed so that the organization can determine whether an event will become (or has become) an incident that requires organizational action. The extent to which an organization can identify events improves its ability to manage and control incidents and their potential effects. |
link |
17 |
CMMC_L3 |
RM.2.141 |
CMMC_L3_RM.2.141 |
CMMC L3 RM.2.141 |
Risk Assessment |
Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational systems and the associated processing, storage, or transmission of CUI. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Clearly defined system boundaries are a prerequisite for effective risk assessments. Such risk assessments consider threats, vulnerabilities, likelihood, and impact to organizational operations, organizational assets, and individuals based on the operation and use of organizational systems. Risk assessments also consider risk from external parties (e.g., service providers, contractors operating systems on behalf of the organization, individuals accessing organizational systems, outsourcing entities). Risk assessments, either formal or informal, can be conducted at the organization level, the mission or business process level, or the system level, and at any phase in the system development life cycle. |
link |
13 |
CMMC_L3 |
RM.2.142 |
CMMC_L3_RM.2.142 |
CMMC L3 RM.2.142 |
Risk Assessment |
Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Organizations determine the required vulnerability scanning for all system components, ensuring that potential sources of vulnerabilities such as networked printers, scanners, and copiers are not overlooked. The vulnerabilities to be scanned are readily updated as new vulnerabilities are discovered, announced, and scanning methods developed. This process ensures that potential vulnerabilities in the system are identified and addressed as quickly as possible. Vulnerability analyses for custom software applications may require additional approaches such as static analysis, dynamic analysis, binary analysis, or a hybrid of the three approaches. Organizations can employ these analysis approaches in a variety of tools (e.g., static analysis tools, web-based application scanners, binary analyzers) and in source code reviews. Vulnerability scanning includes: scanning for patch levels; scanning for functions, ports, protocols, and services that should not be accessible to users or devices; and scanning for improperly configured or incorrectly operating information flow control mechanisms.
To facilitate interoperability, organizations consider using products that are Security Content Automated Protocol (SCAP)-validated, scanning tools that express vulnerabilities in the Common Vulnerabilities and Exposures (CVE) naming convention, and that employ the Open Vulnerability Assessment Language (OVAL) to determine the presence of system vulnerabilities. Sources for vulnerability information include the Common Weakness Enumeration (CWE) listing and the National Vulnerability Database (NVD).
Security assessments, such as red team exercises, provide additional sources of potential vulnerabilities for which to scan. Organizations also consider using scanning tools that express vulnerability impact by the Common Vulnerability Scoring System (CVSS). In certain situations, the nature of the vulnerability scanning may be more intrusive or the system component that is the subject of the scanning may contain highly sensitive information. Privileged access authorization to selected system components facilitates thorough vulnerability scanning and protects the sensitive nature of such scanning. |
link |
13 |
CMMC_L3 |
RM.2.143 |
CMMC_L3_RM.2.143 |
CMMC L3 RM.2.143 |
Risk Assessment |
Remediate vulnerabilities in accordance with risk assessments. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Vulnerabilities discovered, for example, via the scanning conducted in response to RM.2.142, are remediated with consideration of the related assessment of risk. The consideration of risk influences the prioritization of remediation efforts and the level of effort to be expended in the remediation for specific vulnerabilities. |
link |
15 |
CMMC_L3 |
RM.3.144 |
CMMC_L3_RM.3.144 |
CMMC L3 RM.3.144 |
Risk Management |
Periodically perform risk assessments to identify and prioritize risks according to the defined risk categories, risk sources and risk measurement criteria. |
Shared |
Microsoft and the customer share responsibility for implementing this requirement. |
Organizations must evaluate potential cybersecurity risks to operations, assets, and individuals. |
link |
8 |
CMMC_L3 |
SC.3.187 |
CMMC_L3_SC.3.187 |
CMMC L3 SC.3.187 |
System and Communications Protection |
Establish and manage cryptographic keys for cryptography employed in organizational systems. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Cryptographic key management and establishment can be performed using manual procedures or mechanisms supported by manual procedures. Organizations define key management requirements in accordance with applicable federal laws, Executive Orders, policies, directives, regulations, and standards specifying appropriate options, levels, and parameters. |
link |
8 |
CMMC_L3 |
SI.1.213 |
CMMC_L3_SI.1.213 |
CMMC L3 SI.1.213 |
System and Information Integrity |
Perform periodic scans of the information system and real-time scans of files from external sources as files are downloaded, opened, or executed. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Periodic scans of organizational systems and real-time scans of files from external sources can detect malicious code. Malicious code can be encoded in various formats (e.g., UUENCODE, Unicode), contained within compressed or hidden files, or hidden in files using techniques such as steganography. Malicious code can be inserted into systems in a variety of ways including web accesses, electronic mail, electronic mail attachments, and portable storage devices. Malicious code insertions occur through the exploitation of system vulnerabilities. |
link |
9 |
CMMC_L3 |
SI.2.216 |
CMMC_L3_SI.2.216 |
CMMC L3 SI.2.216 |
System and Information Integrity |
Monitor organizational systems, including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
System monitoring includes external and internal monitoring. External monitoring includes the observation of events occurring at the system boundary (i.e., part of perimeter defense and boundary protection). Internal monitoring includes the observation of events occurring within the system. Organizations can monitor systems, for example, by observing audit record activities in real time or by observing other system aspects such as access patterns, characteristics of access, and other actions. The monitoring objectives may guide determination of the events. System monitoring capability is achieved through a variety of tools and techniques (e.g., intrusion detection systems, intrusion prevention systems, malicious code protection software, scanning tools, audit record monitoring software, network monitoring software). Strategic locations for monitoring devices include selected perimeter locations and near server farms supporting critical applications, with such devices being employed at managed system interfaces. The granularity of monitoring information collected is based on organizational monitoring objectives and the capability of systems to support such objectives.
System monitoring is an integral part of continuous monitoring and incident response programs. Output from system monitoring serves as input to continuous monitoring and incident response programs. A network connection is any connection with a device that communicates through a network (e.g., local area network, Internet). A remote connection is any connection with a device communicating through an external network (e.g., the Internet). Local, network, and remote connections can be either wired or wireless.
Unusual or unauthorized activities or conditions related to inbound/outbound communications traffic include internal traffic that indicates the presence of malicious code in systems or propagating among system components, the unauthorized exporting of information, or signaling to external systems. Evidence of malicious code is used to identify potentially compromised systems or system components. System monitoring requirements, including the need for specific types of system monitoring, may be referenced in other requirements. |
link |
23 |
CPS_234_(APRA)_2019 |
CPS_234_(APRA)_2019_27 |
CPS_234_(APRA)_2019_27 |
APRA CPS 234 2019 27 |
Testing control effectiveness |
To ensure that an APRA-regulated entity systematically tests the effectiveness of its information security controls. |
Shared |
n/a |
An APRA-regulated entity must test the effectiveness of its information security controls through a systematic testing program. The nature and frequency of the systematic testing must be commensurate with:
1. the rate at which the vulnerabilities and threats change;
2. the criticality and sensitivity of the information asset;
3. the consequences of an information security incident;
4. the risks associated with exposure to environments where the APRA-regulated entity is unable to enforce its information security policies;
5. the materiality and frequency of change to information assets. |
|
17 |
CSA_v4.0.12 |
DSP_05 |
CSA_v4.0.12_DSP_05 |
CSA Cloud Controls Matrix v4.0.12 DSP 05 |
Data Security and Privacy Lifecycle Management |
Data Flow Documentation |
Shared |
n/a |
Create data flow documentation to identify what data is processed,
stored or transmitted where. Review data flow documentation at defined intervals,
at least annually, and after any change. |
|
57 |
CSA_v4.0.12 |
DSP_10 |
CSA_v4.0.12_DSP_10 |
CSA Cloud Controls Matrix v4.0.12 DSP 10 |
Data Security and Privacy Lifecycle Management |
Sensitive Data Transfer |
Shared |
n/a |
Define, implement and evaluate processes, procedures and technical
measures that ensure any transfer of personal or sensitive data is protected
from unauthorized access and only processed within scope as permitted by the
respective laws and regulations. |
|
45 |
Cyber_Essentials_v3.1 |
3 |
Cyber_Essentials_v3.1_3 |
Cyber Essentials v3.1 3 |
Cyber Essentials |
Security Update Management |
Shared |
n/a |
Aim: ensure that devices and software are not vulnerable to known security issues for which fixes are available. |
|
38 |
Cyber_Essentials_v3.1 |
5 |
Cyber_Essentials_v3.1_5 |
Cyber Essentials v3.1 5 |
Cyber Essentials |
Malware protection |
Shared |
n/a |
Aim: to restrict execution of known malware and untrusted software, preventing it from causing damage or accessing data. |
|
60 |
EU_2555_(NIS2)_2022 |
EU_2555_(NIS2)_2022_11 |
EU_2555_(NIS2)_2022_11 |
EU 2022/2555 (NIS2) 2022 11 |
|
Requirements, technical capabilities and tasks of CSIRTs |
Shared |
n/a |
Outlines the requirements, technical capabilities, and tasks of CSIRTs. |
|
69 |
EU_2555_(NIS2)_2022 |
EU_2555_(NIS2)_2022_12 |
EU_2555_(NIS2)_2022_12 |
EU 2022/2555 (NIS2) 2022 12 |
|
Coordinated vulnerability disclosure and a European vulnerability database |
Shared |
n/a |
Establishes a coordinated vulnerability disclosure process and a European vulnerability database. |
|
67 |
EU_2555_(NIS2)_2022 |
EU_2555_(NIS2)_2022_21 |
EU_2555_(NIS2)_2022_21 |
EU 2022/2555 (NIS2) 2022 21 |
|
Cybersecurity risk-management measures |
Shared |
n/a |
Requires essential and important entities to take appropriate measures to manage cybersecurity risks. |
|
194 |
EU_2555_(NIS2)_2022 |
EU_2555_(NIS2)_2022_29 |
EU_2555_(NIS2)_2022_29 |
EU 2022/2555 (NIS2) 2022 29 |
|
Cybersecurity information-sharing arrangements |
Shared |
n/a |
Allows entities to exchange relevant cybersecurity information on a voluntary basis. |
|
67 |
EU_GDPR_2016_679_Art. |
24 |
EU_GDPR_2016_679_Art._24 |
EU General Data Protection Regulation (GDPR) 2016/679 Art. 24 |
Chapter 4 - Controller and processor |
Responsibility of the controller |
Shared |
n/a |
n/a |
|
311 |
EU_GDPR_2016_679_Art. |
25 |
EU_GDPR_2016_679_Art._25 |
EU General Data Protection Regulation (GDPR) 2016/679 Art. 25 |
Chapter 4 - Controller and processor |
Data protection by design and by default |
Shared |
n/a |
n/a |
|
311 |
EU_GDPR_2016_679_Art. |
28 |
EU_GDPR_2016_679_Art._28 |
EU General Data Protection Regulation (GDPR) 2016/679 Art. 28 |
Chapter 4 - Controller and processor |
Processor |
Shared |
n/a |
n/a |
|
311 |
EU_GDPR_2016_679_Art. |
32 |
EU_GDPR_2016_679_Art._32 |
EU General Data Protection Regulation (GDPR) 2016/679 Art. 32 |
Chapter 4 - Controller and processor |
Security of processing |
Shared |
n/a |
n/a |
|
311 |
FBI_Criminal_Justice_Information_Services_v5.9.5_5 |
.1 |
FBI_Criminal_Justice_Information_Services_v5.9.5_5.1 |
FBI Criminal Justice Information Services (CJIS) v5.9.5 5.1 |
Policy and Implementation - Systems And Communications Protection |
Systems And Communications Protection |
Shared |
In addition, applications, services, or information systems must have the capability to ensure system integrity through the detection and protection against unauthorized changes to software and information. |
Examples of systems and communications safeguards range from boundary and transmission protection to securing an agency's virtualized environment. |
|
111 |
FBI_Criminal_Justice_Information_Services_v5.9.5_5 |
.11 |
FBI_Criminal_Justice_Information_Services_v5.9.5_5.11 |
FBI Criminal Justice Information Services (CJIS) v5.9.5 5.11 |
Policy and Implementation - Formal Audits |
Policy Area 11: Formal Audits |
Shared |
Internal compliance checklists should be kept regularly updated with respect to applicable statutes, regulations, and policies, and on the basis of audit findings. |
Formal audits are conducted to ensure compliance with applicable statutes, regulations and policies. |
|
65 |
FBI_Criminal_Justice_Information_Services_v5.9.5_5 |
.7 |
FBI_Criminal_Justice_Information_Services_v5.9.5_5.7 |
404 not found |
|
|
|
n/a |
n/a |
|
96 |
FedRAMP_High_R4 |
AC-2(12) |
FedRAMP_High_R4_AC-2(12) |
FedRAMP High AC-2 (12) |
Access Control |
Account Monitoring / Atypical Usage |
Shared |
n/a |
The organization:
(a) Monitors information system accounts for [Assignment: organization-defined atypical use]; and
(b) Reports atypical usage of information system accounts to [Assignment: organization-defined personnel or roles].
Supplemental Guidance: Atypical usage includes, for example, accessing information systems at certain times of the day and from locations that are not consistent with the normal usage patterns of individuals working in organizations. Related control: CA-7. |
link |
13 |
FedRAMP_High_R4 |
AU-12 |
FedRAMP_High_R4_AU-12 |
FedRAMP High AU-12 |
Audit And Accountability |
Audit Generation |
Shared |
n/a |
The information system:
a. Provides audit record generation capability for the auditable events defined in AU-2 a. at [Assignment: organization-defined information system components];
b. Allows [Assignment: organization-defined personnel or roles] to select which auditable events are to be audited by specific components of the information system; and
c. Generates audit records for the events defined in AU-2 d. with the content defined in AU-3.
Supplemental Guidance: Audit records can be generated from many different information system components. The list of audited events is the set of events for which audits are to be generated. These events are typically a subset of all events for which the information system is capable of generating audit records. Related controls: AC-3, AU-2, AU-3, AU-6, AU-7.
References: None. |
link |
34 |
FedRAMP_High_R4 |
AU-12(1) |
FedRAMP_High_R4_AU-12(1) |
FedRAMP High AU-12 (1) |
Audit And Accountability |
System-Wide / Time-Correlated Audit Trail |
Shared |
n/a |
The information system compiles audit records from [Assignment: organization-defined information system components] into a system-wide (logical or physical) audit trail that is time- correlated to within [Assignment: organization-defined level of tolerance for relationship between time stamps of individual records in the audit trail].
Supplemental Guidance: Audit trails are time-correlated if the time stamps in the individual audit records can be reliably related to the time stamps in other audit records to achieve a time ordering of the records within organizational tolerances. Related controls: AU-8, AU-12. |
link |
31 |
FedRAMP_High_R4 |
AU-6 |
FedRAMP_High_R4_AU-6 |
FedRAMP High AU-6 |
Audit And Accountability |
Audit Review, Analysis, And Reporting |
Shared |
n/a |
The organization:
a. Reviews and analyzes information system audit records [Assignment: organization-defined frequency] for indications of [Assignment: organization-defined inappropriate or unusual activity]; and
b. Reports findings to [Assignment: organization-defined personnel or roles].
Supplemental Guidance: Audit review, analysis, and reporting covers information security-related auditing performed by organizations including, for example, auditing that results from monitoring of account usage, remote access, wireless connectivity, mobile device connection, configuration settings, system component inventory, use of maintenance tools and nonlocal maintenance, physical access, temperature and humidity, equipment delivery and removal, communications at the information system boundaries, use of mobile code, and use of VoIP. Findings can be reported to organizational entities that include, for example, incident response team, help desk, information security group/department. If organizations are prohibited from reviewing and analyzing audit information or unable to conduct such activities (e.g., in certain national security applications or systems), the review/analysis may be carried out by other organizations granted such authority. Related controls: AC-2, AC-3, AC-6, AC-17, AT-3, AU-7, AU-16, CA-7, CM-5, CM-10, CM-11, IA-3, IA-5, IR-5, IR-6, MA-4, MP-4, PE-3, PE-6, PE-14, PE-16, RA-5, SC-7, SC-18, SC-19, SI-3, SI-4, SI-7.
References: None. |
link |
25 |
FedRAMP_High_R4 |
AU-6(4) |
FedRAMP_High_R4_AU-6(4) |
FedRAMP High AU-6 (4) |
Audit And Accountability |
Central Review And Analysis |
Shared |
n/a |
The information system provides the capability to centrally review and analyze audit records from multiple components within the system.
Supplemental Guidance: Automated mechanisms for centralized reviews and analyses include, for example, Security Information Management products. Related controls: AU-2, AU-12. |
link |
30 |
FedRAMP_High_R4 |
AU-6(5) |
FedRAMP_High_R4_AU-6(5) |
FedRAMP High AU-6 (5) |
Audit And Accountability |
Integration / Scanning And Monitoring Capabilities |
Shared |
n/a |
The organization integrates analysis of audit records with analysis of [Selection (one or more): vulnerability scanning information; performance data; information system monitoring information; [Assignment: organization-defined data/information collected from other sources]] to further enhance the ability to identify inappropriate or unusual activity.
Supplemental Guidance: This control enhancement does not require vulnerability scanning, the generation of performance data, or information system monitoring. Rather, the enhancement requires that the analysis of information being otherwise produced in these areas is integrated with the analysis of audit information. Security Event and Information Management System tools can facilitate audit record aggregation/consolidation from multiple information system components as well as audit record correlation and analysis. The use of standardized audit record analysis scripts developed by organizations (with localized script adjustments, as necessary) provides more cost-effective approaches for analyzing audit record information collected. The correlation of audit record information with vulnerability scanning information is important in determining the veracity of vulnerability scans and correlating attack detection events with scanning results. Correlation with performance data can help uncover denial of service attacks or cyber attacks resulting in unauthorized use of resources. Correlation with system monitoring information can assist in uncovering attacks and in better relating audit information to operational situations. Related controls: AU-12, IR-4, RA-5. |
link |
31 |
FedRAMP_High_R4 |
IR-4 |
FedRAMP_High_R4_IR-4 |
FedRAMP High IR-4 |
Incident Response |
Incident Handling |
Shared |
n/a |
The organization:
a. Implements an incident handling capability for security incidents that includes preparation, detection and analysis, containment, eradication, and recovery;
b. Coordinates incident handling activities with contingency planning activities; and
c. Incorporates lessons learned from ongoing incident handling activities into incident response procedures, training, and testing/exercises, and implements the resulting changes accordingly.
Supplemental Guidance: Organizations recognize that incident response capability is dependent on the capabilities of organizational information systems and the mission/business processes being supported by those systems. Therefore, organizations consider incident response as part of the definition, design, and development of mission/business processes and information systems. Incident-related information can be obtained from a variety of sources including, for example, audit monitoring, network monitoring, physical access monitoring, user/administrator reports, and reported supply chain events. Effective incident handling capability includes coordination among many organizational entities including, for example, mission/business owners, information system owners, authorizing officials, human resources offices, physical and personnel security offices, legal departments, operations personnel, procurement offices, and the risk executive (function). Related controls: AU-6, CM-6, CP-2, CP-4, IR-2, IR-3, IR-8, PE-6, SC-5, SC-7, SI-3, SI-4, SI-7.
References: Executive Order 13587; NIST Special Publication 800-61. |
link |
24 |
FedRAMP_High_R4 |
IR-5 |
FedRAMP_High_R4_IR-5 |
FedRAMP High IR-5 |
Incident Response |
Incident Monitoring |
Shared |
n/a |
The organization tracks and documents information system security incidents.
Supplemental Guidance: Documenting information system security incidents includes, for example, maintaining records about each incident, the status of the incident, and other pertinent information necessary for forensics, evaluating incident details, trends, and handling. Incident information can be obtained from a variety of sources including, for example, incident reports, incident response teams, audit monitoring, network monitoring, physical access monitoring, and user/administrator reports. Related controls: AU-6, IR-8, PE-6, SC-5, SC-7, SI-3, SI-4, SI-7.
References: NIST Special Publication 800-61. |
link |
13 |
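Illustrative aside (not part of the control text above): one minimal way to "track and document" incidents as IR-5 requires is an append-only record per incident. The fields and severity scale below are hypothetical; a real implementation would live in a ticketing or SOAR system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Incident:
    """Minimal incident record: enough to track status and preserve a timeline."""
    incident_id: str
    source: str                     # e.g. "audit monitoring", "user report"
    severity: str                   # organization-defined scale
    status: str = "open"
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    timeline: list = field(default_factory=list)

    def update(self, status: str, note: str) -> None:
        # Append-only timeline preserves the documentation trail for forensics and trends.
        self.timeline.append((datetime.now(timezone.utc), status, note))
        self.status = status

inc = Incident("INC-0001", source="network monitoring", severity="high")
inc.update("contained", "Blocked anomalous Key Vault access from unknown IP")
inc.update("closed", "Root cause remediated; lessons learned recorded")
print(inc.incident_id, inc.status, len(inc.timeline), "timeline entries")
```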
FedRAMP_High_R4 |
RA-5 |
FedRAMP_High_R4_RA-5 |
FedRAMP High RA-5 |
Risk Assessment |
Vulnerability Scanning |
Shared |
n/a |
The organization:
a. Scans for vulnerabilities in the information system and hosted applications [Assignment: organization-defined frequency and/or randomly in accordance with organization-defined process] and when new vulnerabilities potentially affecting the system/applications are identified and reported;
b. Employs vulnerability scanning tools and techniques that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for:
1. Enumerating platforms, software flaws, and improper configurations;
2. Formatting checklists and test procedures; and
3. Measuring vulnerability impact;
c. Analyzes vulnerability scan reports and results from security control assessments;
d. Remediates legitimate vulnerabilities [Assignment: organization-defined response times], in accordance with an organizational assessment of risk; and
e. Shares information obtained from the vulnerability scanning process and security control assessments with [Assignment: organization-defined personnel or roles] to help eliminate similar vulnerabilities in other information systems (i.e., systemic weaknesses or deficiencies).
Supplemental Guidance: Security categorization of information systems guides the frequency and comprehensiveness of vulnerability scans. Organizations determine the required vulnerability scanning for all information system components, ensuring that potential sources of vulnerabilities such as networked printers, scanners, and copiers are not overlooked. Vulnerability analyses for custom software applications may require additional approaches such as static analysis, dynamic analysis, binary analysis, or a hybrid of the three approaches. Organizations can employ these analysis approaches in a variety of tools (e.g., web-based application scanners, static analysis tools, binary analyzers) and in source code reviews. Vulnerability scanning includes, for example: (i) scanning for patch levels; (ii) scanning for functions, ports, protocols, and services that should not be accessible to users or devices; and (iii) scanning for improperly configured or incorrectly operating information flow control mechanisms. Organizations consider using tools that express vulnerabilities in the Common Vulnerabilities and Exposures (CVE) naming convention and that use the Open Vulnerability Assessment Language (OVAL) to determine/test for the presence of vulnerabilities. Suggested sources for vulnerability information include the Common Weakness Enumeration (CWE) listing and the National Vulnerability Database (NVD). In addition, security control assessments such as red team exercises provide other sources of potential vulnerabilities for which to scan. Organizations also consider using tools that express vulnerability impact by the
Common Vulnerability Scoring System (CVSS). Related controls: CA-2, CA-7, CM-4, CM-6, RA-2, RA-3, SA-11, SI-2.
References: NIST Special Publications 800-40, 800-70, 800-115; Web: http://cwe.mitre.org, http://nvd.nist.gov. |
link |
18 |
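Illustrative aside (not part of the control text above): the supplemental guidance suggests expressing vulnerability impact with CVSS. A minimal sketch of CVSS-based prioritization of scan findings follows, using hypothetical finding records and an assumed organization-defined urgency threshold.

```python
# Hypothetical scanner export: one finding per (host, CVE) with a CVSS base score.
findings = [
    {"host": "kv-api-01", "cve": "CVE-2024-0001", "cvss": 9.8},
    {"host": "kv-api-01", "cve": "CVE-2023-4444", "cvss": 4.3},
    {"host": "print-01",  "cve": "CVE-2022-9999", "cvss": 7.5},
]

def prioritize(findings, threshold=7.0):
    """Sort findings by CVSS (highest first) and flag those at or above an
    assumed organization-defined remediation threshold."""
    ranked = sorted(findings, key=lambda f: f["cvss"], reverse=True)
    return [dict(f, urgent=f["cvss"] >= threshold) for f in ranked]

for f in prioritize(findings):
    print(f["host"], f["cve"], f["cvss"], "URGENT" if f["urgent"] else "")
```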
FedRAMP_High_R4 |
SI-2 |
FedRAMP_High_R4_SI-2 |
FedRAMP High SI-2 |
System And Information Integrity |
Flaw Remediation |
Shared |
n/a |
The organization:
a. Identifies, reports, and corrects information system flaws;
b. Tests software and firmware updates related to flaw remediation for effectiveness and potential side effects before installation;
c. Installs security-relevant software and firmware updates within [Assignment: organization-defined time period] of the release of the updates; and
d. Incorporates flaw remediation into the organizational configuration management process.
Supplemental Guidance: Organizations identify information systems affected by announced software flaws including potential vulnerabilities resulting from those flaws, and report this information to designated organizational personnel with information security responsibilities. Security-relevant software updates include, for example, patches, service packs, hot fixes, and anti-virus signatures. Organizations also address flaws discovered during security assessments, continuous monitoring, incident response activities, and system error handling. Organizations take advantage of available resources such as the Common Weakness Enumeration (CWE) or Common Vulnerabilities and Exposures (CVE) databases in remediating flaws discovered in organizational information systems. By incorporating flaw remediation into ongoing configuration management processes, required/anticipated remediation actions can be tracked and verified. Flaw remediation actions that can be tracked and verified include, for example, determining whether organizations follow US-CERT guidance and Information Assurance Vulnerability Alerts. Organization-defined time periods for updating security-relevant software and firmware may vary based on a variety of factors including, for example, the security category of the information system or the criticality of the update (i.e., severity of the vulnerability related to the discovered flaw). Some types of flaw remediation may require more testing than other types. Organizations determine the degree and type of testing needed for the specific type of flaw remediation activity under consideration and also the types of changes that are to be configuration-managed. In some situations, organizations may determine that the testing of software and/or firmware updates is not necessary or practical,
for example, when implementing simple anti-virus signature updates. Organizations may also consider in testing decisions, whether security-relevant software or firmware updates are obtained from authorized sources with appropriate digital signatures. Related controls: CA-2, CA-7, CM-3, CM-5, CM-8, MA-2, IR-4, RA-5, SA-10, SA-11, SI-11. |
link |
15 |
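Illustrative aside (not part of the control text above): SI-2 c. requires installing updates within an organization-defined time period. The sketch below computes install-by dates from hypothetical, assumed severity windows; the window lengths are placeholders, not values prescribed by any standard.

```python
from datetime import date, timedelta

# Hypothetical organization-defined remediation windows (days) by severity.
REMEDIATION_WINDOW = {"critical": 15, "high": 30, "moderate": 90, "low": 180}

def due_date(reported: date, severity: str) -> date:
    """Compute the install-by date for a security-relevant update."""
    return reported + timedelta(days=REMEDIATION_WINDOW[severity])

flaws = [
    ("CVE-2024-0001", "critical", date(2024, 6, 1)),
    ("CVE-2023-4444", "moderate", date(2024, 6, 1)),
]
for cve, sev, reported in flaws:
    print(cve, sev, "remediate by", due_date(reported, sev))
```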
FedRAMP_High_R4 |
SI-4 |
FedRAMP_High_R4_SI-4 |
FedRAMP High SI-4 |
System And Information Integrity |
Information System Monitoring |
Shared |
n/a |
The organization:
a. Monitors the information system to detect:
1. Attacks and indicators of potential attacks in accordance with [Assignment: organization-defined monitoring objectives]; and
2. Unauthorized local, network, and remote connections;
b. Identifies unauthorized use of the information system through [Assignment: organization-defined techniques and methods];
c. Deploys monitoring devices: (i) strategically within the information system to collect organization-determined essential information; and (ii) at ad hoc locations within the system to track specific types of transactions of interest to the organization;
d. Protects information obtained from intrusion-monitoring tools from unauthorized access, modification, and deletion;
e. Heightens the level of information system monitoring activity whenever there is an indication of increased risk to organizational operations and assets, individuals, other organizations, or the Nation based on law enforcement information, intelligence information, or other credible sources of information;
f. Obtains legal opinion with regard to information system monitoring activities in accordance with applicable federal laws, Executive Orders, directives, policies, or regulations; and
g. Provides [Assignment: organization-defined information system monitoring information] to [Assignment: organization-defined personnel or roles] [Selection (one or more): as needed; [Assignment: organization-defined frequency]].
Supplemental Guidance: Information system monitoring includes external and internal monitoring. External monitoring includes the observation of events occurring at the information system boundary (i.e., part of perimeter defense and boundary protection). Internal monitoring includes the observation of events occurring within the information system. Organizations can monitor information systems, for example, by observing audit activities in real time or by observing other system aspects such as access patterns, characteristics of access, and other actions. The monitoring objectives may guide determination of the events. Information system monitoring capability is achieved through a variety of tools and techniques (e.g., intrusion detection systems, intrusion prevention systems, malicious code protection software, scanning tools, audit record monitoring software, network monitoring software). Strategic locations for monitoring devices include, for example, selected perimeter locations and near server farms supporting critical applications, with such devices typically being employed at the managed interfaces associated with controls SC-7 and AC-17. Einstein network monitoring devices from the Department of Homeland Security can also be included as monitoring devices. The granularity of monitoring information collected is based on organizational monitoring objectives and the capability of information systems to support such objectives. Specific types of transactions of interest include, for example, Hyper Text Transfer Protocol (HTTP) traffic that bypasses HTTP proxies. Information system monitoring is an integral part of organizational continuous monitoring and incident response programs. Output from system monitoring serves as input to continuous monitoring and incident response programs. A network connection is any connection with a device that communicates through a network (e.g., local area network, Internet). A remote connection is any connection with a device communicating through an external network (e.g., the Internet). Local, network, and remote connections can be either wired or wireless. Related controls: AC-3, AC-4, AC-8, AC-17, AU-2, AU-6, AU-7, AU-9, AU-12, CA-7, IR-4, PE-3, RA-5, SC-7, SC-26, SC-35, SI-3, SI-7.
References: NIST Special Publications 800-61, 800-83, 800-92, 800-94, 800-137. |
link |
22 |
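Illustrative aside (not part of the control text above): the guidance calls out HTTP traffic that bypasses HTTP proxies as a transaction of interest. A minimal sketch of that check over hypothetical flow records follows, with assumed internal address ranges and proxy addresses.

```python
import ipaddress

# Hypothetical flow records: (source, destination, destination port).
flows = [
    ("10.0.1.15", "203.0.113.8", 80),     # direct outbound HTTP
    ("10.0.1.15", "10.0.0.10", 3128),     # via the approved proxy
    ("10.0.2.7",  "198.51.100.20", 443),  # direct outbound HTTPS
]
APPROVED_PROXIES = {"10.0.0.10"}          # assumed proxy address
INTERNAL = ipaddress.ip_network("10.0.0.0/8")
WEB_PORTS = {80, 443}

def bypasses_proxy(src: str, dst: str, port: int) -> bool:
    """Flag web traffic from internal hosts that goes direct to the Internet
    instead of through an approved proxy."""
    return (ipaddress.ip_address(src) in INTERNAL
            and ipaddress.ip_address(dst) not in INTERNAL
            and dst not in APPROVED_PROXIES
            and port in WEB_PORTS)

for src, dst, port in flows:
    if bypasses_proxy(src, dst, port):
        print("proxy bypass:", src, "->", dst, port)
```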
FedRAMP_Moderate_R4 |
AC-2(12) |
FedRAMP_Moderate_R4_AC-2(12) |
FedRAMP Moderate AC-2 (12) |
Access Control |
Account Monitoring / Atypical Usage |
Shared |
n/a |
The organization:
(a) Monitors information system accounts for [Assignment: organization-defined atypical use]; and
(b) Reports atypical usage of information system accounts to [Assignment: organization-defined personnel or roles].
Supplemental Guidance: Atypical usage includes, for example, accessing information systems at certain times of the day and from locations that are not consistent with the normal usage patterns of individuals working in organizations. Related control: CA-7. |
link |
13 |
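Illustrative aside (not part of the control text above): a minimal sketch of flagging atypical account usage, assuming the organization defines "atypical" as off-hours logons or sign-ins from unexpected countries. The events, hours, and allow lists are hypothetical.

```python
from datetime import datetime

# Hypothetical sign-in events; "atypical use" here is assumed to mean logons
# outside 06:00-20:00 or from countries not on a per-user allow list.
signins = [
    {"user": "alice", "time": datetime(2024, 6, 3, 2, 14), "country": "US"},
    {"user": "alice", "time": datetime(2024, 6, 3, 9, 5),  "country": "US"},
    {"user": "bob",   "time": datetime(2024, 6, 3, 10, 0), "country": "RO"},
]
EXPECTED_COUNTRIES = {"alice": {"US"}, "bob": {"US", "CA"}}

def atypical(event) -> list:
    """Return the reasons an event counts as atypical, or an empty list."""
    reasons = []
    if not (6 <= event["time"].hour < 20):
        reasons.append("off-hours logon")
    if event["country"] not in EXPECTED_COUNTRIES.get(event["user"], set()):
        reasons.append("unexpected location")
    return reasons

for e in signins:
    reasons = atypical(e)
    if reasons:
        print("report:", e["user"], e["time"].isoformat(), "; ".join(reasons))
```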
FedRAMP_Moderate_R4 |
AU-12 |
FedRAMP_Moderate_R4_AU-12 |
FedRAMP Moderate AU-12 |
Audit And Accountability |
Audit Generation |
Shared |
n/a |
The information system:
a. Provides audit record generation capability for the auditable events defined in AU-2 a. at [Assignment: organization-defined information system components];
b. Allows [Assignment: organization-defined personnel or roles] to select which auditable events are to be audited by specific components of the information system; and
c. Generates audit records for the events defined in AU-2 d. with the content defined in AU-3.
Supplemental Guidance: Audit records can be generated from many different information system components. The list of audited events is the set of events for which audits are to be generated. These events are typically a subset of all events for which the information system is capable of generating audit records. Related controls: AC-3, AU-2, AU-3, AU-6, AU-7.
References: None. |
link |
34 |
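Illustrative aside (not part of the control text above): a minimal sketch of generating an audit record that carries AU-3-style content (timestamp, event type, subject, source, outcome). The field names and the JSON shape are assumptions, not a prescribed format.

```python
import json
from datetime import datetime, timezone

def audit_record(event_type: str, subject: str, outcome: str, source: str, **extra):
    """Emit one structured audit record: what happened, when, the source
    component, the outcome, and who or what was involved."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # one of the auditable events defined per AU-2
        "subject": subject,         # user or process identity
        "outcome": outcome,         # "success" / "failure"
        "source": source,           # component that generated the record
        **extra,
    }
    return json.dumps(record)

print(audit_record("secret_read", subject="app-svc-01", outcome="failure",
                   source="keyvault-frontend", object="secrets/db-password"))
```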
FedRAMP_Moderate_R4 |
AU-6 |
FedRAMP_Moderate_R4_AU-6 |
FedRAMP Moderate AU-6 |
Audit And Accountability |
Audit Review, Analysis, And Reporting |
Shared |
n/a |
The organization:
a. Reviews and analyzes information system audit records [Assignment: organization-defined frequency] for indications of [Assignment: organization-defined inappropriate or unusual activity]; and
b. Reports findings to [Assignment: organization-defined personnel or roles].
Supplemental Guidance: Audit review, analysis, and reporting covers information security-related auditing performed by organizations including, for example, auditing that results from monitoring of account usage, remote access, wireless connectivity, mobile device connection, configuration settings, system component inventory, use of maintenance tools and nonlocal maintenance, physical access, temperature and humidity, equipment delivery and removal, communications at the information system boundaries, use of mobile code, and use of VoIP. Findings can be reported to organizational entities that include, for example, incident response team, help desk, information security group/department. If organizations are prohibited from reviewing and analyzing audit information or unable to conduct such activities (e.g., in certain national security applications or systems), the review/analysis may be carried out by other organizations granted such authority. Related controls: AC-2, AC-3, AC-6, AC-17, AT-3, AU-7, AU-16, CA-7, CM-5, CM-10, CM-11, IA-3, IA-5, IR-5, IR-6, MA-4, MP-4, PE-3, PE-6, PE-14, PE-16, RA-5, SC-7, SC-18, SC-19, SI-3, SI-4, SI-7.
References: None. |
link |
25 |
FedRAMP_Moderate_R4 |
IR-4 |
FedRAMP_Moderate_R4_IR-4 |
FedRAMP Moderate IR-4 |
Incident Response |
Incident Handling |
Shared |
n/a |
The organization:
a. Implements an incident handling capability for security incidents that includes preparation, detection and analysis, containment, eradication, and recovery;
b. Coordinates incident handling activities with contingency planning activities; and
c. Incorporates lessons learned from ongoing incident handling activities into incident response procedures, training, and testing/exercises, and implements the resulting changes accordingly.
Supplemental Guidance: Organizations recognize that incident response capability is dependent on the capabilities of organizational information systems and the mission/business processes being supported by those systems. Therefore, organizations consider incident response as part of the definition, design, and development of mission/business processes and information systems. Incident-related information can be obtained from a variety of sources including, for example, audit monitoring, network monitoring, physical access monitoring, user/administrator reports, and reported supply chain events. Effective incident handling capability includes coordination among many organizational entities including, for example, mission/business owners, information system owners, authorizing officials, human resources offices, physical and personnel security offices, legal departments, operations personnel, procurement offices, and the risk executive (function). Related controls: AU-6, CM-6, CP-2, CP-4, IR-2, IR-3, IR-8, PE-6, SC-5, SC-7, SI-3, SI-4, SI-7.
References: Executive Order 13587; NIST Special Publication 800-61. |
link |
24 |
FedRAMP_Moderate_R4 |
IR-5 |
FedRAMP_Moderate_R4_IR-5 |
FedRAMP Moderate IR-5 |
Incident Response |
Incident Monitoring |
Shared |
n/a |
The organization tracks and documents information system security incidents.
Supplemental Guidance: Documenting information system security incidents includes, for example, maintaining records about each incident, the status of the incident, and other pertinent information necessary for forensics, evaluating incident details, trends, and handling. Incident information can be obtained from a variety of sources including, for example, incident reports, incident response teams, audit monitoring, network monitoring, physical access monitoring, and user/administrator reports. Related controls: AU-6, IR-8, PE-6, SC-5, SC-7, SI-3, SI-4, SI-7.
References: NIST Special Publication 800-61. |
link |
13 |
FedRAMP_Moderate_R4 |
RA-5 |
FedRAMP_Moderate_R4_RA-5 |
FedRAMP Moderate RA-5 |
Risk Assessment |
Vulnerability Scanning |
Shared |
n/a |
The organization:
a. Scans for vulnerabilities in the information system and hosted applications [Assignment: organization-defined frequency and/or randomly in accordance with organization-defined process] and when new vulnerabilities potentially affecting the system/applications are identified and reported;
b. Employs vulnerability scanning tools and techniques that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for:
1. Enumerating platforms, software flaws, and improper configurations;
2. Formatting checklists and test procedures; and
3. Measuring vulnerability impact;
c. Analyzes vulnerability scan reports and results from security control assessments;
d. Remediates legitimate vulnerabilities [Assignment: organization-defined response times], in accordance with an organizational assessment of risk; and
e. Shares information obtained from the vulnerability scanning process and security control assessments with [Assignment: organization-defined personnel or roles] to help eliminate similar vulnerabilities in other information systems (i.e., systemic weaknesses or deficiencies).
Supplemental Guidance: Security categorization of information systems guides the frequency and comprehensiveness of vulnerability scans. Organizations determine the required vulnerability scanning for all information system components, ensuring that potential sources of vulnerabilities such as networked printers, scanners, and copiers are not overlooked. Vulnerability analyses for custom software applications may require additional approaches such as static analysis, dynamic analysis, binary analysis, or a hybrid of the three approaches. Organizations can employ these analysis approaches in a variety of tools (e.g., web-based application scanners, static analysis tools, binary analyzers) and in source code reviews. Vulnerability scanning includes, for example: (i) scanning for patch levels; (ii) scanning for functions, ports, protocols, and services that should not be accessible to users or devices; and (iii) scanning for improperly configured or incorrectly operating information flow control mechanisms. Organizations consider using tools that express vulnerabilities in the Common Vulnerabilities and Exposures (CVE) naming convention and that use the Open Vulnerability Assessment Language (OVAL) to determine/test for the presence of vulnerabilities. Suggested sources for vulnerability information include the Common Weakness Enumeration (CWE) listing and the National Vulnerability Database (NVD). In addition, security control assessments such as red team exercises provide other sources of potential vulnerabilities for which to scan. Organizations also consider using tools that express vulnerability impact by the
Common Vulnerability Scoring System (CVSS). Related controls: CA-2, CA-7, CM-4, CM-6, RA-2, RA-3, SA-11, SI-2.
References: NIST Special Publications 800-40, 800-70, 800-115; Web: http://cwe.mitre.org, http://nvd.nist.gov. |
link |
18 |
FedRAMP_Moderate_R4 |
SI-2 |
FedRAMP_Moderate_R4_SI-2 |
FedRAMP Moderate SI-2 |
System And Information Integrity |
Flaw Remediation |
Shared |
n/a |
The organization:
a. Identifies, reports, and corrects information system flaws;
b. Tests software and firmware updates related to flaw remediation for effectiveness and potential side effects before installation;
c. Installs security-relevant software and firmware updates within [Assignment: organization-defined time period] of the release of the updates; and
d. Incorporates flaw remediation into the organizational configuration management process.
Supplemental Guidance: Organizations identify information systems affected by announced software flaws including potential vulnerabilities resulting from those flaws, and report this information to designated organizational personnel with information security responsibilities. Security-relevant software updates include, for example, patches, service packs, hot fixes, and anti-virus signatures. Organizations also address flaws discovered during security assessments, continuous monitoring, incident response activities, and system error handling. Organizations take advantage of available resources such as the Common Weakness Enumeration (CWE) or Common Vulnerabilities and Exposures (CVE) databases in remediating flaws discovered in organizational information systems. By incorporating flaw remediation into ongoing configuration management processes, required/anticipated remediation actions can be tracked and verified. Flaw remediation actions that can be tracked and verified include, for example, determining whether organizations follow US-CERT guidance and Information Assurance Vulnerability Alerts. Organization-defined time periods for updating security-relevant software and firmware may vary based on a variety of factors including, for example, the security category of the information system or the criticality of the update (i.e., severity of the vulnerability related to the discovered flaw). Some types of flaw remediation may require more testing than other types. Organizations determine the degree and type of testing needed for the specific type of flaw remediation activity under consideration and also the types of changes that are to be configuration-managed. In some situations, organizations may determine that the testing of software and/or firmware updates is not necessary or practical,
for example, when implementing simple anti-virus signature updates. Organizations may also consider in testing decisions, whether security-relevant software or firmware updates are obtained from authorized sources with appropriate digital signatures. Related controls: CA-2, CA-7, CM-3, CM-5, CM-8, MA-2, IR-4, RA-5, SA-10, SA-11, SI-11. |
link |
15 |
FedRAMP_Moderate_R4 |
SI-4 |
FedRAMP_Moderate_R4_SI-4 |
FedRAMP Moderate SI-4 |
System And Information Integrity |
Information System Monitoring |
Shared |
n/a |
The organization:
a. Monitors the information system to detect:
1. Attacks and indicators of potential attacks in accordance with [Assignment: organization-defined monitoring objectives]; and
2. Unauthorized local, network, and remote connections;
b. Identifies unauthorized use of the information system through [Assignment: organization-defined techniques and methods];
c. Deploys monitoring devices: (i) strategically within the information system to collect organization-determined essential information; and (ii) at ad hoc locations within the system to track specific types of transactions of interest to the organization;
d. Protects information obtained from intrusion-monitoring tools from unauthorized access, modification, and deletion;
e. Heightens the level of information system monitoring activity whenever there is an indication of increased risk to organizational operations and assets, individuals, other organizations, or the Nation based on law enforcement information, intelligence information, or other credible sources of information;
f. Obtains legal opinion with regard to information system monitoring activities in accordance with applicable federal laws, Executive Orders, directives, policies, or regulations; and
g. Provides [Assignment: organization-defined information system monitoring information] to [Assignment: organization-defined personnel or roles] [Selection (one or more): as needed; [Assignment: organization-defined frequency]].
Supplemental Guidance: Information system monitoring includes external and internal monitoring. External monitoring includes the observation of events occurring at the information system boundary (i.e., part of perimeter defense and boundary protection). Internal monitoring includes the observation of events occurring within the information system. Organizations can monitor information systems, for example, by observing audit activities in real time or by observing other system aspects such as access patterns, characteristics of access, and other actions. The monitoring objectives may guide determination of the events. Information system monitoring capability is achieved through a variety of tools and techniques (e.g., intrusion detection systems, intrusion prevention systems, malicious code protection software, scanning tools, audit record monitoring software, network monitoring software). Strategic locations for monitoring devices include, for example, selected perimeter locations and near server farms supporting critical applications, with such devices typically being employed at the managed interfaces associated with controls SC-7 and AC-17. Einstein network monitoring devices from the Department of Homeland Security can also be included as monitoring devices. The granularity of monitoring information collected is based on organizational monitoring objectives and the capability of information systems to support such objectives. Specific types of transactions of interest include, for example, Hyper Text Transfer Protocol (HTTP) traffic that bypasses HTTP proxies. Information system monitoring is an integral part of organizational continuous monitoring and incident response programs. Output from system monitoring serves as input to continuous monitoring and incident response programs. A network connection is any connection with a device that communicates through a network (e.g., local area network, Internet). A remote connection is any connection with a device communicating through an external network (e.g., the Internet). Local, network, and remote connections can be either wired or wireless. Related controls: AC-3, AC-4, AC-8, AC-17, AU-2, AU-6, AU-7, AU-9, AU-12, CA-7, IR-4, PE-3, RA-5, SC-7, SC-26, SC-35, SI-3, SI-7.
References: NIST Special Publications 800-61, 800-83, 800-92, 800-94, 800-137. |
link |
22 |
FFIEC_CAT_2017 |
1.2.3 |
FFIEC_CAT_2017_1.2.3 |
FFIEC CAT 2017 1.2.3 |
Cyber Risk Management and Oversight |
Audit |
Shared |
n/a |
- Independent audit or review evaluates policies, procedures, and controls across the institution for significant risks and control issues associated with the institution's operations, including risks in new products, emerging technologies, and information systems.
- The independent audit function validates controls related to the storage or transmission of confidential data.
- Logging practices are independently reviewed periodically to ensure appropriate log management (e.g., access controls, retention, and maintenance).
- Issues and corrective actions from internal audits and independent testing/assessments are formally tracked to ensure procedures and control lapses are resolved in a timely manner. |
|
13 |
FFIEC_CAT_2017 |
3.1.1 |
FFIEC_CAT_2017_3.1.1 |
FFIEC CAT 2017 3.1.1 |
Cybersecurity Controls |
Infrastructure Management |
Shared |
n/a |
- Network perimeter defense tools (e.g., border router and firewall) are used.
- Systems that are accessed from the Internet or by external parties are protected by firewalls or other similar devices.
- All ports are monitored.
- Up to date antivirus and anti-malware tools are used.
- Systems configurations (for servers, desktops, routers, etc.) follow industry standards and are enforced.
- Ports, functions, protocols and services are prohibited if no longer needed for business purposes.
- Access to make changes to systems configurations (including virtual machines and hypervisors) is controlled and monitored.
- Programs that can override system, object, network, virtual machine, and application controls are restricted.
- System sessions are locked after a pre-defined period of inactivity and are terminated after pre-defined conditions are met.
- Wireless network environments require security settings with strong encryption for authentication and transmission. (*N/A if there are no wireless networks.) |
|
72 |
FFIEC_CAT_2017 |
3.2.2 |
FFIEC_CAT_2017_3.2.2 |
FFIEC CAT 2017 3.2.2 |
Cybersecurity Controls |
Anomalous Activity Detection |
Shared |
n/a |
- The institution is able to detect anomalous activities through monitoring across the environment.
- Customer transactions generating anomalous activity alerts are monitored and reviewed.
- Logs of physical and/or logical access are reviewed following events.
- Access to critical systems by third parties is monitored for unauthorized or unusual activity.
- Elevated privileges are monitored. |
|
27 |
FFIEC_CAT_2017 |
3.2.3 |
FFIEC_CAT_2017_3.2.3 |
FFIEC CAT 2017 3.2.3 |
Cybersecurity Controls |
Event Detection |
Shared |
n/a |
- A normal network activity baseline is established.
- Mechanisms (e.g., antivirus alerts, log event alerts) are in place to alert management to potential attacks.
- Processes are in place to monitor for the presence of unauthorized users, devices, connections, and software.
- Responsibilities for monitoring and reporting suspicious systems activity have been assigned.
- The physical environment is monitored to detect potential unauthorized access. |
|
35 |
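Illustrative aside (not part of the assessment text above): establishing "a normal network activity baseline" and alerting on deviations can be as simple as a mean-plus-standard-deviation check. The connection counts and the three-sigma threshold below are hypothetical.

```python
from statistics import mean, stdev

# Hypothetical daily outbound-connection counts for one host: 14 days of
# history form the baseline; today's count is tested against it.
history = [120, 131, 118, 125, 140, 122, 119, 128, 133, 121, 127, 124, 130, 126]
today = 310

def is_anomalous(history, observed, sigmas=3.0) -> bool:
    """Flag values more than `sigmas` standard deviations above the historical mean."""
    return observed > mean(history) + sigmas * stdev(history)

print("alert" if is_anomalous(history, today) else "within baseline")
```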
FFIEC_CAT_2017 |
4.1.1 |
FFIEC_CAT_2017_4.1.1 |
FFIEC CAT 2017 4.1.1 |
External Dependency Management |
Connections |
Shared |
n/a |
- The critical business processes that are dependent on external connectivity have been identified.
- The institution ensures that third-party connections are authorized.
- A network diagram is in place and identifies all external connections.
- Data flow diagrams are in place and document information flow to external parties. |
|
43 |
HITRUST_CSF_v11.3 |
01.m |
HITRUST_CSF_v11.3_01.m |
HITRUST CSF v11.3 01.m |
Network Access Control |
To ensure segregation in networks. |
Shared |
Security gateways, internal network perimeters, wireless network segregation, firewalls, and logical network domains with controlled data flows are to be implemented to enhance network security. |
Groups of information services, users, and information systems should be segregated on networks. |
|
48 |
HITRUST_CSF_v11.3 |
01.n |
HITRUST_CSF_v11.3_01.n |
HITRUST CSF v11.3 01.n |
Network Access Control |
To prevent unauthorised access to shared networks. |
Shared |
A default deny policy at managed interfaces, restricted user connections through network gateways, comprehensive access controls, time-based restrictions, and encryption of sensitive information transmitted over public networks are to be implemented for enhanced security. |
For shared networks, especially those extending across the organization’s boundaries, the capability of users to connect to the network shall be restricted, in line with the access control policy and requirements of the business applications. |
|
55 |
HITRUST_CSF_v11.3 |
09.j |
HITRUST_CSF_v11.3_09.j |
HITRUST CSF v11.3 09.j |
Protection Against Malicious and Mobile Code |
To ensure that integrity of information and software is protected from malicious or unauthorized code |
Shared |
1. Technologies are to be implemented for timely installation, upgrade and renewal of anti-malware protective measures.
2. Automatic periodic scans of information systems are to be implemented.
3. Anti-malware software that offers a centralized infrastructure that compiles information on file reputations is to be implemented.
4. Following malicious code updates, signature deployment and the scanning of files, email, and web traffic are to be verified by automated systems; BYOD users require anti-malware, and network-based malware detection is to be used on servers without host-based solutions.
5. Anti-malware audit log checks are to be performed.
6. Protection against malicious code is to be based on malicious code detection and repair software, security awareness, appropriate system access, and change management controls. |
Detection, prevention, and recovery controls shall be implemented to protect against malicious code, and appropriate user awareness procedures on malicious code shall be provided. |
|
37 |
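Illustrative aside (not part of the control text above): one small piece of the "timely installation, upgrade and renewal" requirement is verifying anti-malware signature freshness. The sketch below assumes a hypothetical inventory of last-update timestamps and an assumed three-day freshness window.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: last successful signature update per host, as might
# be pulled from an endpoint-protection console.
last_update = {
    "kv-api-01": datetime.now(timezone.utc) - timedelta(hours=6),
    "build-07":  datetime.now(timezone.utc) - timedelta(days=9),
}
MAX_AGE = timedelta(days=3)   # assumed organization-defined freshness requirement

stale = {host: ts for host, ts in last_update.items()
         if datetime.now(timezone.utc) - ts > MAX_AGE}
for host, ts in stale.items():
    print("stale anti-malware signatures:", host, "last updated", ts.date())
```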
ISO_IEC_27001_2022 |
10.2 |
ISO_IEC_27001_2022_10.2 |
ISO IEC 27001 2022 10.2 |
Improvement |
Nonconformity and corrective action |
Shared |
1. When a nonconformity occurs, the organization shall:
a. react to the nonconformity, and as applicable:
i. take action to control and correct it;
ii. deal with the consequences;
b. evaluate the need for action to eliminate the causes of nonconformity, in order that it does not recur or occur elsewhere, by:
i. reviewing the nonconformity;
ii. determining the causes of the nonconformity; and
iii. determining if similar nonconformities exist, or could potentially occur;
c. implement any action needed;
d. review the effectiveness of any corrective action taken; and
e. make changes to the information security management system, if necessary.
2. Corrective actions shall be appropriate to the effects of the nonconformities encountered.
3. Documented information shall be available as evidence of:
a. the nature of the nonconformities and any subsequent actions taken,
b. the results of any corrective action. |
Specifies the actions that the organisation shall take in cases of nonconformity. |
|
18 |
ISO_IEC_27001_2022 |
7.5.3 |
ISO_IEC_27001_2022_7.5.3 |
ISO IEC 27001 2022 7.5.3 |
Support |
Control of documented information |
Shared |
1. Documented information required by the information security management system and by this document shall be controlled to ensure:
a. it is available and suitable for use, where and when it is needed; and
b. it is adequately protected (e.g. from loss of confidentiality, improper use, or loss of integrity).
2. For the control of documented information, the organization shall address the following activities, as applicable:
a. distribution, access, retrieval and use;
b. storage and preservation, including the preservation of legibility;
c. control of changes (e.g. version control); and
d. retention and disposition. |
Specifies that the documented information of external origin, determined by the organization to be necessary for the planning and operation of the information security management system, shall be identified as appropriate, and controlled. |
|
32 |
ISO_IEC_27001_2022 |
9.1 |
ISO_IEC_27001_2022_9.1 |
ISO IEC 27001 2022 9.1 |
Performance Evaluation |
Monitoring, measurement, analysis and evaluation |
Shared |
1. The organization shall determine:
a. what needs to be monitored and measured, including information security processes and controls;
b. the methods for monitoring, measurement, analysis and evaluation, as applicable, to ensure valid results. The methods selected should produce comparable and reproducible results to be considered valid;
c. when the monitoring and measuring shall be performed;
d. who shall monitor and measure;
e. when the results from monitoring and measurement shall be analysed and evaluated;
f. who shall analyse and evaluate these results.
2. Documented information shall be available as evidence of the results. |
Specifies that the organisation must evaluate information security performance and the effectiveness of the information security management system. |
|
44 |
ISO_IEC_27001_2022 |
9.3.3 |
ISO_IEC_27001_2022_9.3.3 |
ISO IEC 27001 2022 9.3.3 |
Internal Audit |
Management Review Results |
Shared |
The results of the management review shall include decisions related to continual improvement opportunities and any needs for changes to the information security management system. |
Specifies the considerations that the management review results shall include. |
|
16 |
ISO_IEC_27002_2022 |
8.7 |
ISO_IEC_27002_2022_8.7 |
ISO IEC 27002 2022 8.7 |
Identifying,
Protection,
Preventive Control |
Protection against malware |
Shared |
Protection against malware should be implemented and supported by appropriate user awareness.
|
To ensure information and other associated assets are protected against malware. |
|
19 |
New_Zealand_ISM |
07.1.7.C.02 |
New_Zealand_ISM_07.1.7.C.02 |
New_Zealand_ISM_07.1.7.C.02 |
07. Information Security Incidents |
07.1.7.C.02 Preventing and detecting information security incidents |
|
n/a |
Agencies SHOULD develop, implement and maintain tools and procedures covering the detection of potential information security incidents, incorporating: user awareness and training; counter-measures against malicious code, known attack methods and types; intrusion detection strategies; data egress monitoring & control; access control anomalies; audit analysis; system integrity checking; and vulnerability assessments. |
|
16 |
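Illustrative aside (not part of the control text above): the "system integrity checking" item in the list above can be approximated by comparing file hashes against a recorded baseline. The baseline path and digest below are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical baseline: path -> expected SHA-256, captured at deployment time.
BASELINE = {
    "/etc/hosts": "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def sha256(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def integrity_check(baseline: dict) -> list:
    """Return files whose current hash no longer matches the recorded baseline."""
    drift = []
    for path, expected in baseline.items():
        try:
            if sha256(path) != expected:
                drift.append(path)
        except FileNotFoundError:
            drift.append(path)   # missing files are also reported
    return drift

print("modified or missing:", integrity_check(BASELINE))
```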
NIST_CSF_v2.0 |
GV.SC_07 |
NIST_CSF_v2.0_GV.SC_07 |
NIST CSF v2.0 GV.SC 07 |
GOVERN-Cybersecurity Supply Chain Risk Management |
The risks posed by a supplier, their products and services, and other third parties are understood, recorded, prioritized, assessed, responded to, and monitored over the course of the relationship. |
Shared |
n/a |
To establish, communicate, and monitor the risk management strategy, expectations, and policy. |
|
17 |
NIST_SP_800-171_R2_3 |
.11.2 |
NIST_SP_800-171_R2_3.11.2 |
NIST SP 800-171 R2 3.11.2 |
Risk Assessment |
Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Organizations determine the required vulnerability scanning for all system components, ensuring that potential sources of vulnerabilities such as networked printers, scanners, and copiers are not overlooked. The vulnerabilities to be scanned are readily updated as new vulnerabilities are discovered, announced, and scanning methods developed. This process ensures that potential vulnerabilities in the system are identified and addressed as quickly as possible. Vulnerability analyses for custom software applications may require additional approaches such as static analysis, dynamic analysis, binary analysis, or a hybrid of the three approaches. Organizations can employ these analysis approaches in source code reviews and in a variety of tools (e.g., static analysis tools, web-based application scanners, binary analyzers). Vulnerability scanning includes: scanning for patch levels; scanning for functions, ports, protocols, and services that should not be accessible to users or devices; and scanning for improperly configured or incorrectly operating information flow control mechanisms. To facilitate interoperability, organizations consider using products that are Security Content Automated Protocol (SCAP)-validated, scanning tools that express vulnerabilities in the Common Vulnerabilities and Exposures (CVE) naming convention, and that employ the Open Vulnerability Assessment Language (OVAL) to determine the presence of system vulnerabilities. Sources for vulnerability information include the Common Weakness Enumeration (CWE) listing and the National Vulnerability Database (NVD). Security assessments, such as red team exercises, provide additional sources of potential vulnerabilities for which to scan. Organizations also consider using scanning tools that express vulnerability impact by the Common Vulnerability Scoring System (CVSS). In certain situations, the nature of the vulnerability scanning may be more intrusive or the system component that is the subject of the scanning may contain highly sensitive information. Privileged access authorization to selected system components facilitates thorough vulnerability scanning and protects the sensitive nature of such scanning. [SP 800-40] provides guidance on vulnerability management. |
link |
19 |
NIST_SP_800-171_R2_3 |
.11.3 |
NIST_SP_800-171_R2_3.11.3 |
NIST SP 800-171 R2 3.11.3 |
Risk Assessment |
Remediate vulnerabilities in accordance with risk assessments. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Vulnerabilities discovered, for example, via the scanning conducted in response to 3.11.2, are remediated with consideration of the related assessment of risk. The consideration of risk influences the prioritization of remediation efforts and the level of effort to be expended in the remediation for specific vulnerabilities. |
link |
18 |
NIST_SP_800-171_R2_3 |
.14.1 |
NIST_SP_800-171_R2_3.14.1 |
NIST SP 800-171 R2 3.14.1 |
System and Information Integrity |
Identify, report, and correct system flaws in a timely manner. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Organizations identify systems that are affected by announced software and firmware flaws including potential vulnerabilities resulting from those flaws and report this information to designated personnel with information security responsibilities. Security-relevant updates include patches, service packs, hot fixes, and anti-virus signatures. Organizations address flaws discovered during security assessments, continuous monitoring, incident response activities, and system error handling. Organizations can take advantage of available resources such as the Common Weakness Enumeration (CWE) database or Common Vulnerabilities and Exposures (CVE) database in remediating flaws discovered in organizational systems. Organization-defined time periods for updating security-relevant software and firmware may vary based on a variety of factors including the criticality of the update (i.e., severity of the vulnerability related to the discovered flaw). Some types of flaw remediation may require more testing than other types of remediation. [SP 800-40] provides guidance on patch management technologies. |
link |
17 |
NIST_SP_800-171_R2_3 |
.14.2 |
NIST_SP_800-171_R2_3.14.2 |
NIST SP 800-171 R2 3.14.2 |
System and Information Integrity |
Provide protection from malicious code at designated locations within organizational systems. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Designated locations include system entry and exit points which may include firewalls, remote-access servers, workstations, electronic mail servers, web servers, proxy servers, notebook computers, and mobile devices. Malicious code includes viruses, worms, Trojan horses, and spyware. Malicious code can be encoded in various formats (e.g., UUENCODE, Unicode), contained within compressed or hidden files, or hidden in files using techniques such as steganography. Malicious code can be inserted into systems in a variety of ways including web accesses, electronic mail, electronic mail attachments, and portable storage devices. Malicious code insertions occur through the exploitation of system vulnerabilities. Malicious code protection mechanisms include anti-virus signature definitions and reputation-based technologies. A variety of technologies and methods exist to limit or eliminate the effects of malicious code. Pervasive configuration management and comprehensive software integrity controls may be effective in preventing execution of unauthorized code. In addition to commercial off-the-shelf software, malicious code may also be present in custom-built software. This could include logic bombs, back doors, and other types of cyber-attacks that could affect organizational missions/business functions. Traditional malicious code protection mechanisms cannot always detect such code. In these situations, organizations rely instead on other safeguards including secure coding practices, configuration management and control, trusted procurement processes, and monitoring practices to help ensure that software does not perform functions other than the functions intended. [SP 800-83] provides guidance on malware incident prevention. |
link |
18 |
NIST_SP_800-171_R2_3 |
.14.3 |
NIST_SP_800-171_R2_3.14.3 |
NIST SP 800-171 R2 3.14.3 |
System and Information Integrity |
Monitor system security alerts and advisories and take action in response. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
There are many publicly available sources of system security alerts and advisories. For example, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) generates security alerts and advisories to maintain situational awareness across the federal government and in nonfederal organizations. Software vendors, subscription services, and industry information sharing and analysis centers (ISACs) may also provide security alerts and advisories. Examples of response actions include notifying relevant external organizations, for example, external mission/business partners, supply chain partners, external service providers, and peer or supporting organizations. [SP 800-161] provides guidance on supply chain risk management. |
link |
14 |
NIST_SP_800-171_R2_3 |
.14.6 |
NIST_SP_800-171_R2_3.14.6 |
NIST SP 800-171 R2 3.14.6 |
System and Information Integrity |
Monitor organizational systems, including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
System monitoring includes external and internal monitoring. External monitoring includes the observation of events occurring at the system boundary (i.e., part of perimeter defense and boundary protection). Internal monitoring includes the observation of events occurring within the system. Organizations can monitor systems, for example, by observing audit record activities in real time or by observing other system aspects such as access patterns, characteristics of access, and other actions. The monitoring objectives may guide determination of the events. System monitoring capability is achieved through a variety of tools and techniques (e.g., intrusion detection systems, intrusion prevention systems, malicious code protection software, scanning tools, audit record monitoring software, network monitoring software). Strategic locations for monitoring devices include selected perimeter locations and near server farms supporting critical applications, with such devices being employed at managed system interfaces. The granularity of monitoring information collected is based on organizational monitoring objectives and the capability of systems to support such objectives. System monitoring is an integral part of continuous monitoring and incident response programs. Output from system monitoring serves as input to continuous monitoring and incident response programs. A network connection is any connection with a device that communicates through a network (e.g., local area network, Internet). A remote connection is any connection with a device communicating through an external network (e.g., the Internet). Local, network, and remote connections can be either wired or wireless. Unusual or unauthorized activities or conditions related to inbound/outbound communications traffic include internal traffic that indicates the presence of malicious code in systems or propagating among system components, the unauthorized exporting of information, or signaling to external systems. Evidence of malicious code is used to identify potentially compromised systems or system components. System monitoring requirements, including the need for specific types of system monitoring, may be referenced in other requirements. [SP 800-94] provides guidance on intrusion detection and prevention systems. |
link |
27 |
NIST_SP_800-171_R2_3 |
.14.7 |
NIST_SP_800-171_R2_3.14.7 |
NIST SP 800-171 R2 3.14.7 |
System and Information Integrity |
Identify unauthorized use of organizational systems. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
System monitoring includes external and internal monitoring. System monitoring can detect unauthorized use of organizational systems. System monitoring is an integral part of continuous monitoring and incident response programs. Monitoring is achieved through a variety of tools and techniques (e.g., intrusion detection systems, intrusion prevention systems, malicious code protection software, scanning tools, audit record monitoring software, network monitoring software). Output from system monitoring serves as input to continuous monitoring and incident response programs. Unusual/unauthorized activities or conditions related to inbound and outbound communications traffic include internal traffic that indicates the presence of malicious code in systems or propagating among system components, the unauthorized exporting of information, or signaling to external systems. Evidence of malicious code is used to identify potentially compromised systems or system components. System monitoring requirements, including the need for specific types of system monitoring, may be referenced in other requirements. [SP 800-94] provides guidance on intrusion detection and prevention systems. |
link |
20 |
NIST_SP_800-171_R2_3 |
.3.1 |
NIST_SP_800-171_R2_3.3.1 |
NIST SP 800-171 R2 3.3.1 |
Audit and Accountability |
Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
An event is any observable occurrence in a system, which includes unlawful or unauthorized system activity. Organizations identify event types for which a logging functionality is needed as those events which are significant and relevant to the security of systems and the environments in which those systems operate to meet specific and ongoing auditing needs. Event types can include password changes, failed logons or failed accesses related to systems, administrative privilege usage, or third-party credential usage. In determining event types that require logging, organizations consider the monitoring and auditing appropriate for each of the CUI security requirements. Monitoring and auditing requirements can be balanced with other system needs. For example, organizations may determine that systems must have the capability to log every file access both successful and unsuccessful, but not activate that capability except for specific circumstances due to the potential burden on system performance. Audit records can be generated at various levels of abstraction, including at the packet level as information traverses the network. Selecting the appropriate level of abstraction is a critical aspect of an audit logging capability and can facilitate the identification of root causes to problems. Organizations consider in the definition of event types, the logging necessary to cover related events such as the steps in distributed, transaction-based processes (e.g., processes that are distributed across multiple organizations) and actions that occur in service-oriented or cloud-based architectures. Audit record content that may be necessary to satisfy this requirement includes time stamps, source and destination addresses, user or process identifiers, event descriptions, success or fail indications, filenames involved, and access control or flow control rules invoked. Event outcomes can include indicators of event success or failure and event-specific results (e.g., the security state of the system after the event occurred). Detailed information that organizations may consider in audit records includes full text recording of privileged commands or the individual identities of group account users. Organizations consider limiting the additional audit log information to only that information explicitly needed for specific audit requirements. This facilitates the use of audit trails and audit logs by not including information that could potentially be misleading or could make it more difficult to locate information of interest. Audit logs are reviewed and analyzed as often as needed to provide important information to organizations to facilitate risk-based decision making. [SP 800-92] provides guidance on security log management. |
link |
50 |
NIST_SP_800-171_R2_3 |
.3.2 |
NIST_SP_800-171_R2_3.3.2 |
NIST SP 800-171 R2 3.3.2 |
Audit and Accountability |
Ensure that the actions of individual system users can be uniquely traced to those users, so they can be held accountable for their actions. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
This requirement ensures that the contents of the audit record include the information needed to link the audit event to the actions of an individual to the extent feasible. Organizations consider logging for traceability including results from monitoring of account usage, remote access, wireless connectivity, mobile device connection, communications at system boundaries, configuration settings, physical access, nonlocal maintenance, use of maintenance tools, temperature and humidity, equipment delivery and removal, system component inventory, use of mobile code, and use of Voice over Internet Protocol (VoIP). |
link |
36 |
NIST_SP_800-171_R2_3 |
.3.4 |
NIST_SP_800-171_R2_3.3.4 |
NIST SP 800-171 R2 3.3.4 |
Audit and Accountability |
Alert in the event of an audit logging process failure. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Audit logging process failures include software and hardware errors, failures in the audit record capturing mechanisms, and audit record storage capacity being reached or exceeded. This requirement applies to each audit record data storage repository (i.e., distinct system component where audit records are stored), the total audit record storage capacity of organizations (i.e., all audit record data storage repositories combined), or both. |
link |
12 |
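Illustrative aside (not part of the requirement text above): a minimal sketch of alerting on the two audit logging failure modes the discussion mentions, a stalled capture pipeline and exhausted storage, using assumed thresholds and a placeholder volume path.

```python
import shutil
import time

# Assumed thresholds: alert if no new audit records in 10 minutes or the
# log volume is more than 90% full.
MAX_SILENCE_SECONDS = 600
MAX_USED_FRACTION = 0.90

def logging_failure(last_record_epoch: float, log_path: str = ".") -> list:
    """Return detected audit logging failures; point log_path at the audit
    record storage volume in practice (current directory is a placeholder)."""
    problems = []
    if time.time() - last_record_epoch > MAX_SILENCE_SECONDS:
        problems.append("no audit records received recently")
    usage = shutil.disk_usage(log_path)
    if usage.used / usage.total > MAX_USED_FRACTION:
        problems.append("audit storage capacity nearly exhausted")
    return problems

# Example: pretend the last record arrived 20 minutes ago.
for p in logging_failure(time.time() - 1200):
    print("ALERT:", p)
```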
NIST_SP_800-171_R2_3 |
.3.5 |
NIST_SP_800-171_R2_3.3.5 |
NIST SP 800-171 R2 3.3.5 |
Audit and Accountability |
Correlate audit record review, analysis, and reporting processes for investigation and response to indications of unlawful, unauthorized, suspicious, or unusual activity. |
Shared |
Microsoft and the customer share responsibilities for implementing this requirement. |
Correlating audit record review, analysis, and reporting processes helps to ensure that they do not operate independently, but rather collectively. Regarding the assessment of a given organizational system, the requirement is agnostic as to whether this correlation is applied at the system level or at the organization level across all systems. |
link |
13 |
NIST_SP_800-171_R3_3 |
.12.3 |
NIST_SP_800-171_R3_3.12.3 |
NIST 800-171 R3 3.12.3 |
Security Assessment Control |
Continuous Monitoring |
Shared |
Continuous monitoring at the system level facilitates ongoing awareness of the system security posture to support risk management decisions. The terms continuous and ongoing imply that organizations assess and monitor their systems at a frequency that is sufficient to support risk based decisions. Different types of security requirements may require different monitoring frequencies. |
Continuous monitoring at the system level facilitates ongoing awareness of the system security posture to support risk management decisions. The terms continuous and ongoing imply that organizations assess and monitor their systems at a frequency that is sufficient to support risk based decisions. Different types of security requirements may require different monitoring frequencies. |
|
17 |
NIST_SP_800-171_R3_3 |
.13.1 |
NIST_SP_800-171_R3_3.13.1 |
NIST 800-171 R3 3.13.1 |
System and Communications Protection Control |
Boundary Protection |
Shared |
Managed interfaces include gateways, routers, firewalls, network-based malicious code analysis, virtualization systems, and encrypted tunnels implemented within a security architecture. Subnetworks that are either physically or logically separated from internal networks are referred to as demilitarized zones or DMZs. Restricting or prohibiting interfaces within organizational systems includes restricting external web traffic to designated web servers within managed interfaces, prohibiting external traffic that appears to be spoofing internal addresses, and prohibiting internal traffic that appears to be spoofing external addresses. |
a. Monitor and control communications at the external managed interfaces to the system and at key internal managed interfaces within the system.
b. Implement subnetworks for publicly accessible system components that are physically or logically separated from internal networks.
c. Connect to external systems only through managed interfaces consisting of boundary protection devices arranged in accordance with an organizational security architecture. |
|
43 |
NIST_SP_800-171_R3 |
3.14.2 |
NIST_SP_800-171_R3_3.14.2 |
NIST 800-171 R3 3.14.2 |
System and Information Integrity Control |
Malicious Code Protection |
Shared |
Malicious code insertions occur through the exploitation of system vulnerabilities. Periodic scans of the system and real-time scans of files from external sources as files are downloaded, opened, or executed can detect malicious code. Malicious code can be inserted into the system in many ways, including by email, the Internet, and portable storage devices. Malicious code includes viruses, worms, Trojan horses, and spyware. Malicious code can be encoded in various formats, contained in compressed or hidden files, or hidden in files using techniques such as steganography. In addition to the above technologies, pervasive configuration management, comprehensive software integrity controls, and anti-exploitation software may be effective in preventing the execution of unauthorized code. Malicious code may be present in commercial off-the-shelf software and custom-built software and could include logic bombs, backdoors, and other types of attacks that could affect organizational mission and business functions.
If malicious code cannot be detected by detection methods or technologies, organizations can rely on secure coding practices, configuration management and control, trusted procurement processes, and monitoring practices to help ensure that the software only performs intended functions. Organizations may determine that different actions are warranted in response to the detection of malicious code. For example, organizations can define actions to be taken in response to malicious code detection during scans, the detection of malicious downloads, or the detection of maliciousness when attempting to open or execute files. |
a. Implement malicious code protection mechanisms at designated locations within the system to detect and eradicate malicious code.
b. Update malicious code protection mechanisms as new releases are available in accordance with configuration management policy and procedures.
c. Configure malicious code protection mechanisms to:
1. Perform scans of the system [Assignment: organization-defined frequency] and real-time scans of files from external sources at endpoints or network entry and exit points as the files are downloaded, opened, or executed; and
2. Block malicious code, quarantine malicious code, or take other actions in response to malicious code detection. |
|
19 |
NIST_SP_800-171_R3 |
3.4.3 |
NIST_SP_800-171_R3_3.4.3 |
NIST 800-171 R3 3.4.3 |
|
|
|
n/a |
n/a |
|
16 |
NIST_SP_800-53_R4 |
AC-2(12) |
NIST_SP_800-53_R4_AC-2(12) |
NIST SP 800-53 Rev. 4 AC-2 (12) |
Access Control |
Account Monitoring / Atypical Usage |
Shared |
n/a |
The organization:
(a) Monitors information system accounts for [Assignment: organization-defined atypical use]; and
(b) Reports atypical usage of information system accounts to [Assignment: organization-defined personnel or roles].
Supplemental Guidance: Atypical usage includes, for example, accessing information systems at certain times of the day and from locations that are not consistent with the normal usage patterns of individuals working in organizations. Related control: CA-7. |
link |
13 |
NIST_SP_800-53_R4 |
AU-12 |
NIST_SP_800-53_R4_AU-12 |
NIST SP 800-53 Rev. 4 AU-12 |
Audit And Accountability |
Audit Generation |
Shared |
n/a |
The information system:
a. Provides audit record generation capability for the auditable events defined in AU-2 a. at [Assignment: organization-defined information system components];
b. Allows [Assignment: organization-defined personnel or roles] to select which auditable events are to be audited by specific components of the information system; and
c. Generates audit records for the events defined in AU-2 d. with the content defined in AU-3.
Supplemental Guidance: Audit records can be generated from many different information system components. The list of audited events is the set of events for which audits are to be generated. These events are typically a subset of all events for which the information system is capable of generating audit records. Related controls: AC-3, AU-2, AU-3, AU-6, AU-7.
References: None. |
link |
34 |
NIST_SP_800-53_R4 |
AU-12(1) |
NIST_SP_800-53_R4_AU-12(1) |
NIST SP 800-53 Rev. 4 AU-12 (1) |
Audit And Accountability |
System-Wide / Time-Correlated Audit Trail |
Shared |
n/a |
The information system compiles audit records from [Assignment: organization-defined information system components] into a system-wide (logical or physical) audit trail that is time-correlated to within [Assignment: organization-defined level of tolerance for relationship between time stamps of individual records in the audit trail].
Supplemental Guidance: Audit trails are time-correlated if the time stamps in the individual audit records can be reliably related to the time stamps in other audit records to achieve a time ordering of the records within organizational tolerances. Related controls: AU-8, AU-12. |
link |
31 |
NIST_SP_800-53_R4 |
AU-6 |
NIST_SP_800-53_R4_AU-6 |
NIST SP 800-53 Rev. 4 AU-6 |
Audit And Accountability |
Audit Review, Analysis, And Reporting |
Shared |
n/a |
The organization:
a. Reviews and analyzes information system audit records [Assignment: organization-defined frequency] for indications of [Assignment: organization-defined inappropriate or unusual activity]; and
b. Reports findings to [Assignment: organization-defined personnel or roles].
Supplemental Guidance: Audit review, analysis, and reporting covers information security-related auditing performed by organizations including, for example, auditing that results from monitoring of account usage, remote access, wireless connectivity, mobile device connection, configuration settings, system component inventory, use of maintenance tools and nonlocal maintenance, physical access, temperature and humidity, equipment delivery and removal, communications at the information system boundaries, use of mobile code, and use of VoIP. Findings can be reported to organizational entities that include, for example, incident response team, help desk, information security group/department. If organizations are prohibited from reviewing and analyzing audit information or unable to conduct such activities (e.g., in certain national security applications or systems), the review/analysis may be carried out by other organizations granted such authority. Related controls: AC-2, AC-3, AC-6, AC-17, AT-3, AU-7, AU-16, CA-7, CM-5, CM-10, CM-11, IA-3, IA-5, IR-5, IR-6, MA-4, MP-4, PE-3, PE-6, PE-14, PE-16, RA-5, SC-7, SC-18, SC-19, SI-3, SI-4, SI-7.
References: None. |
link |
25 |
NIST_SP_800-53_R4 |
AU-6(4) |
NIST_SP_800-53_R4_AU-6(4) |
NIST SP 800-53 Rev. 4 AU-6 (4) |
Audit And Accountability |
Central Review And Analysis |
Shared |
n/a |
The information system provides the capability to centrally review and analyze audit records from multiple components within the system.
Supplemental Guidance: Automated mechanisms for centralized reviews and analyses include, for example, Security Information Management products. Related controls: AU-2, AU-12. |
link |
30 |
NIST_SP_800-53_R4 |
AU-6(5) |
NIST_SP_800-53_R4_AU-6(5) |
NIST SP 800-53 Rev. 4 AU-6 (5) |
Audit And Accountability |
Integration / Scanning And Monitoring Capabilities |
Shared |
n/a |
The organization integrates analysis of audit records with analysis of [Selection (one or more): vulnerability scanning information; performance data; information system monitoring information; [Assignment: organization-defined data/information collected from other sources]] to further enhance the ability to identify inappropriate or unusual activity.
Supplemental Guidance: This control enhancement does not require vulnerability scanning, the generation of performance data, or information system monitoring. Rather, the enhancement requires that the analysis of information being otherwise produced in these areas is integrated with the analysis of audit information. Security Event and Information Management System tools can facilitate audit record aggregation/consolidation from multiple information system components as well as audit record correlation and analysis. The use of standardized audit record analysis scripts developed by organizations (with localized script adjustments, as necessary) provides more cost-effective approaches for analyzing audit record information collected. The correlation of audit record information with vulnerability scanning information is important in determining the veracity of vulnerability scans and correlating attack detection events with scanning results. Correlation with performance data can help uncover denial of service attacks or cyber attacks resulting in unauthorized use of resources. Correlation with system monitoring information can assist in uncovering attacks and in better relating audit information to operational situations. Related controls: AU-12, IR-4, RA-5. |
link |
31 |
NIST_SP_800-53_R4 |
IR-4 |
NIST_SP_800-53_R4_IR-4 |
NIST SP 800-53 Rev. 4 IR-4 |
Incident Response |
Incident Handling |
Shared |
n/a |
The organization:
a. Implements an incident handling capability for security incidents that includes preparation, detection and analysis, containment, eradication, and recovery;
b. Coordinates incident handling activities with contingency planning activities; and
c. Incorporates lessons learned from ongoing incident handling activities into incident response procedures, training, and testing/exercises, and implements the resulting changes accordingly.
Supplemental Guidance: Organizations recognize that incident response capability is dependent on the capabilities of organizational information systems and the mission/business processes being supported by those systems. Therefore, organizations consider incident response as part of the definition, design, and development of mission/business processes and information systems. Incident-related information can be obtained from a variety of sources including, for example, audit monitoring, network monitoring, physical access monitoring, user/administrator reports, and reported supply chain events. Effective incident handling capability includes coordination among many organizational entities including, for example, mission/business owners, information system owners, authorizing officials, human resources offices, physical and personnel security offices, legal departments, operations personnel, procurement offices, and the risk executive (function). Related controls: AU-6, CM-6, CP-2, CP-4, IR-2, IR-3, IR-8, PE-6, SC-5, SC-7, SI-3, SI-4, SI-7.
References: Executive Order 13587; NIST Special Publication 800-61. |
link |
24 |
NIST_SP_800-53_R4 |
IR-5 |
NIST_SP_800-53_R4_IR-5 |
NIST SP 800-53 Rev. 4 IR-5 |
Incident Response |
Incident Monitoring |
Shared |
n/a |
The organization tracks and documents information system security incidents.
Supplemental Guidance: Documenting information system security incidents includes, for example, maintaining records about each incident, the status of the incident, and other pertinent information necessary for forensics, evaluating incident details, trends, and handling. Incident information can be obtained from a variety of sources including, for example, incident reports, incident response teams, audit monitoring, network monitoring, physical access monitoring, and user/administrator reports. Related controls: AU-6, IR-8, PE-6, SC-5, SC-7, SI-3, SI-4, SI-7.
References: NIST Special Publication 800-61. |
link |
13 |
NIST_SP_800-53_R4 |
RA-5 |
NIST_SP_800-53_R4_RA-5 |
NIST SP 800-53 Rev. 4 RA-5 |
Risk Assessment |
Vulnerability Scanning |
Shared |
n/a |
The organization:
a. Scans for vulnerabilities in the information system and hosted applications [Assignment: organization-defined frequency and/or randomly in accordance with organization-defined process] and when new vulnerabilities potentially affecting the system/applications are identified and reported;
b. Employs vulnerability scanning tools and techniques that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for:
1. Enumerating platforms, software flaws, and improper configurations;
2. Formatting checklists and test procedures; and
3. Measuring vulnerability impact;
c. Analyzes vulnerability scan reports and results from security control assessments;
d. Remediates legitimate vulnerabilities [Assignment: organization-defined response times], in accordance with an organizational assessment of risk; and
e. Shares information obtained from the vulnerability scanning process and security control assessments with [Assignment: organization-defined personnel or roles] to help eliminate similar vulnerabilities in other information systems (i.e., systemic weaknesses or deficiencies).
Supplemental Guidance: Security categorization of information systems guides the frequency and comprehensiveness of vulnerability scans. Organizations determine the required vulnerability scanning for all information system components, ensuring that potential sources of vulnerabilities such as networked printers, scanners, and copiers are not overlooked. Vulnerability analyses for custom software applications may require additional approaches such as static analysis, dynamic analysis, binary analysis, or a hybrid of the three approaches. Organizations can employ these analysis approaches in a variety of tools (e.g., web-based application scanners, static analysis tools, binary analyzers) and in source code reviews. Vulnerability scanning includes, for example: (i) scanning for patch levels; (ii) scanning for functions, ports, protocols, and services that should not be accessible to users or devices; and (iii) scanning for improperly configured or incorrectly operating information flow control mechanisms. Organizations consider using tools that express vulnerabilities in the Common Vulnerabilities and Exposures (CVE) naming convention and that use the Open Vulnerability Assessment Language (OVAL) to determine/test for the presence of vulnerabilities. Suggested sources for vulnerability information include the Common Weakness Enumeration (CWE) listing and the National Vulnerability Database (NVD). In addition, security control assessments such as red team exercises provide other sources of potential vulnerabilities for which to scan. Organizations also consider using tools that express vulnerability impact by the
Common Vulnerability Scoring System (CVSS). Related controls: CA-2, CA-7, CM-4, CM-6, RA-2, RA-3, SA-11, SI-2.
References: NIST Special Publications 800-40, 800-70, 800-115; Web: http://cwe.mitre.org, http://nvd.nist.gov. |
link |
18 |
NIST_SP_800-53_R4 |
SI-2 |
NIST_SP_800-53_R4_SI-2 |
NIST SP 800-53 Rev. 4 SI-2 |
System And Information Integrity |
Flaw Remediation |
Shared |
n/a |
The organization:
a. Identifies, reports, and corrects information system flaws;
b. Tests software and firmware updates related to flaw remediation for effectiveness and potential side effects before installation;
c. Installs security-relevant software and firmware updates within [Assignment: organization-defined time period] of the release of the updates; and
d. Incorporates flaw remediation into the organizational configuration management process.
Supplemental Guidance: Organizations identify information systems affected by announced software flaws including potential vulnerabilities resulting from those flaws, and report this information to designated organizational personnel with information security responsibilities. Security-relevant software updates include, for example, patches, service packs, hot fixes, and anti-virus signatures. Organizations also address flaws discovered during security assessments, continuous monitoring, incident response activities, and system error handling. Organizations take advantage of available resources such as the Common Weakness Enumeration (CWE) or Common Vulnerabilities and Exposures (CVE) databases in remediating flaws discovered in organizational information systems. By incorporating flaw remediation into ongoing configuration management processes, required/anticipated remediation actions can be tracked and verified. Flaw remediation actions that can be tracked and verified include, for example, determining whether organizations follow US-CERT guidance and Information Assurance Vulnerability Alerts. Organization-defined time periods for updating security-relevant software and firmware may vary based on a variety of factors including, for example, the security category of the information system or the criticality of the update (i.e., severity of the vulnerability related to the discovered flaw). Some types of flaw remediation may require more testing than other types. Organizations determine the degree and type of testing needed for the specific type of flaw remediation activity under consideration and also the types of changes that are to be configuration-managed. In some situations, organizations may determine that the testing of software and/or firmware updates is not necessary or practical,
for example, when implementing simple anti-virus signature updates. Organizations may also consider in testing decisions, whether security-relevant software or firmware updates are obtained from authorized sources with appropriate digital signatures. Related controls: CA-2, CA-7, CM-3, CM-5, CM-8, MA-2, IR-4, RA-5, SA-10, SA-11, SI-11. |
link |
15 |
NIST_SP_800-53_R4 |
SI-4 |
NIST_SP_800-53_R4_SI-4 |
NIST SP 800-53 Rev. 4 SI-4 |
System And Information Integrity |
Information System Monitoring |
Shared |
n/a |
The organization:
a. Monitors the information system to detect:
1. Attacks and indicators of potential attacks in accordance with [Assignment: organization-defined monitoring objectives]; and
2. Unauthorized local, network, and remote connections;
b. Identifies unauthorized use of the information system through [Assignment: organization-defined techniques and methods];
c. Deploys monitoring devices: (i) strategically within the information system to collect organization-determined essential information; and (ii) at ad hoc locations within the system to track specific types of transactions of interest to the organization;
d. Protects information obtained from intrusion-monitoring tools from unauthorized access, modification, and deletion;
e. Heightens the level of information system monitoring activity whenever there is an indication of increased risk to organizational operations and assets, individuals, other organizations, or the Nation based on law enforcement information, intelligence information, or other credible sources of information;
f. Obtains legal opinion with regard to information system monitoring activities in accordance with applicable federal laws, Executive Orders, directives, policies, or regulations; and
g. Provides [Assignment: organization-defined information system monitoring information] to [Assignment: organization-defined personnel or roles] [Selection (one or more): as needed; [Assignment: organization-defined frequency]].
Supplemental Guidance: Information system monitoring includes external and internal monitoring. External monitoring includes the observation of events occurring at the information system boundary (i.e., part of perimeter defense and boundary protection). Internal monitoring includes the observation of events occurring within the information system. Organizations can monitor information systems, for example, by observing audit activities in real time or by observing other system aspects such as access patterns, characteristics of access, and other actions. The monitoring objectives may guide determination of the events. Information system monitoring capability is achieved through a variety of tools and techniques (e.g., intrusion detection systems, intrusion prevention systems, malicious code protection software, scanning tools, audit record monitoring software, network monitoring software). Strategic locations for monitoring devices include, for example, selected perimeter locations and near server farms supporting critical applications, with such devices typically being employed at the managed interfaces associated with controls SC-7 and AC-17. Einstein network monitoring devices from the Department of Homeland Security can also be included as monitoring devices. The granularity of monitoring information collected is based on organizational monitoring objectives and the capability of information systems to support such objectives. Specific types of transactions of interest include, for example, Hyper Text Transfer Protocol (HTTP) traffic that bypasses HTTP proxies. Information system monitoring is an integral part of organizational continuous monitoring and incident response programs. Output from system monitoring serves as input to continuous monitoring and incident response programs. A network connection is any connection with a device that communicates through a network (e.g., local area network, Internet). A remote connection is any connection with a device communicating through an external network (e.g., the Internet). Local, network, and remote connections can be either wired or wireless. Related controls: AC-3, AC-4, AC-8, AC-17, AU-2, AU-6, AU-7, AU-9, AU-12, CA-7, IR-4, PE-3, RA-5, SC-7, SC-26, SC-35, SI-3, SI-7.
References: NIST Special Publications 800-61, 800-83, 800-92, 800-94, 800-137. |
link |
22 |
NIST_SP_800-53_R5.1.1 |
CA.7 |
NIST_SP_800-53_R5.1.1_CA.7 |
NIST SP 800-53 R5.1.1 CA.7 |
Assessment, Authorization and Monitoring Control |
Continuous Monitoring |
Shared |
Develop a system-level continuous monitoring strategy and implement continuous monitoring in accordance with the organization-level continuous monitoring strategy that includes:
a. Establishing the following system-level metrics to be monitored: [Assignment: organization-defined system-level metrics];
b. Establishing [Assignment: organization-defined frequencies] for monitoring and [Assignment: organization-defined frequencies] for assessment of control effectiveness;
c. Ongoing control assessments in accordance with the continuous monitoring strategy;
d. Ongoing monitoring of system and organization-defined metrics in accordance with the continuous monitoring strategy;
e. Correlation and analysis of information generated by control assessments and monitoring;
f. Response actions to address results of the analysis of control assessment and monitoring information; and
g. Reporting the security and privacy status of the system to [Assignment: organization-defined personnel or roles]
[Assignment: organization-defined frequency]. |
Continuous monitoring at the system level facilitates ongoing awareness of the system security and privacy posture to support organizational risk management decisions. The terms “continuous” and “ongoing” imply that organizations assess and monitor their controls and risks at a frequency sufficient to support risk-based decisions. Different types of controls may require different monitoring frequencies. The results of continuous monitoring generate risk response actions by organizations. When monitoring the effectiveness of multiple controls that have been grouped into capabilities, a root-cause analysis may be needed to determine the specific control that has failed. Continuous monitoring programs allow organizations to maintain the authorizations of systems and common controls in highly dynamic environments of operation with changing mission and business needs, threats, vulnerabilities, and technologies. Having access to security and privacy information on a continuing basis through reports and dashboards gives organizational officials the ability to make effective and timely risk management decisions, including ongoing authorization decisions.
Automation supports more frequent updates to hardware, software, and firmware inventories, authorization packages, and other system information. Effectiveness is further enhanced when continuous monitoring outputs are formatted to provide information that is specific, measurable, actionable, relevant, and timely. Continuous monitoring activities are scaled in accordance with the security categories of systems. Monitoring requirements, including the need for specific monitoring, may be referenced in other controls and control enhancements, such as AC-2g, AC-2(7), AC-2(12)(a), AC-2(7)(b), AC-2(7)(c), AC-17(1), AT-4a, AU-13, AU-13(1), AU-13(2), CM-3f, CM-6d, CM-11c, IR-5, MA-2b, MA-3a, MA-4a, PE-3d, PE-6, PE-14b, PE-16, PE-20, PM-6, PM-23, PM-31, PS-7e, SA-9c, SR-4, SC-5(3)(b), SC-7a, SC-7(24)(b), SC-18c, SC-43b, and SI-4. |
|
17 |
NIST_SP_800-53_R5.1.1 |
CA.7.4 |
NIST_SP_800-53_R5.1.1_CA.7.4 |
NIST SP 800-53 R5.1.1 CA.7.4 |
Assessment, Authorization and Monitoring Control |
Continuous Monitoring | Risk Monitoring |
Shared |
Ensure risk monitoring is an integral part of the continuous monitoring strategy that includes the following:
(a) Effectiveness monitoring;
(b) Compliance monitoring; and
(c) Change monitoring. |
Risk monitoring is informed by the established organizational risk tolerance. Effectiveness monitoring determines the ongoing effectiveness of the implemented risk response measures. Compliance monitoring verifies that required risk response measures are implemented. It also verifies that security and privacy requirements are satisfied. Change monitoring identifies changes to organizational systems and environments of operation that may affect security and privacy risk. |
|
14 |
NIST_SP_800-53_R5.1.1 |
SC.7 |
NIST_SP_800-53_R5.1.1_SC.7 |
NIST SP 800-53 R5.1.1 SC.7 |
System and Communications Protection |
Boundary Protection |
Shared |
a. Monitor and control communications at the external managed interfaces to the system and at key internal managed interfaces within the system;
b. Implement subnetworks for publicly accessible system components that are [Selection: physically; logically] separated from internal organizational networks; and
c. Connect to external networks or systems only through managed interfaces consisting of boundary protection devices arranged in accordance with an organizational security and privacy architecture. |
Managed interfaces include gateways, routers, firewalls, guards, network-based malicious code analysis, virtualization systems, or encrypted tunnels implemented within a security architecture. Subnetworks that are physically or logically separated from internal networks are referred to as demilitarized zones or DMZs. Restricting or prohibiting interfaces within organizational systems includes restricting external web traffic to designated web servers within managed interfaces, prohibiting external traffic that appears to be spoofing internal addresses, and prohibiting internal traffic that appears to be spoofing external addresses. Commercial telecommunications services are provided by network components and consolidated management systems shared by customers. These services may also include third party-provided access lines and other service elements. Such services may represent sources of increased risk despite contract security provisions. Boundary protection may be implemented as a common control for all or part of an organizational network such that the boundary to be protected is greater than a system-specific boundary (i.e., an authorization boundary). |
|
43 |
NIST_SP_800-53_R5.1.1 |
SI.3 |
NIST_SP_800-53_R5.1.1_SI.3 |
NIST SP 800-53 R5.1.1 SI.3 |
System and Information Integrity Control |
Malicious Code Protection |
Shared |
a. Implement [Selection (one or more): signature based; non-signature based] malicious code protection mechanisms at system entry and exit points to detect and eradicate malicious code;
b. Automatically update malicious code protection mechanisms as new releases are available in accordance with organizational configuration management policy and procedures;
c. Configure malicious code protection mechanisms to:
1. Perform periodic scans of the system [Assignment: organization-defined frequency] and real-time scans of files from external sources at [Selection (one or more): endpoint; network entry and exit points] as the files are downloaded, opened, or executed in accordance with organizational policy; and
2. [Selection (one or more): block malicious code; quarantine malicious code; take [Assignment: organization-defined action]]; and send alert to [Assignment: organization-defined personnel or roles] in response to malicious code detection; and |
d. Address the receipt of false positives during malicious code detection and eradication and the resulting potential impact on the availability of the system. |
System entry and exit points include firewalls, remote access servers, workstations, electronic mail servers, web servers, proxy servers, notebook computers, and mobile devices. Malicious code includes viruses, worms, Trojan horses, and spyware. Malicious code can also be encoded in various formats contained within compressed or hidden files or hidden in files using techniques such as steganography. Malicious code can be inserted into systems in a variety of ways, including by electronic mail, the world-wide web, and portable storage devices. Malicious code insertions occur through the exploitation of system vulnerabilities. A variety of technologies and methods exist to limit or eliminate the effects of malicious code.
Malicious code protection mechanisms include both signature- and nonsignature-based technologies. Nonsignature-based detection mechanisms include artificial intelligence techniques that use heuristics to detect, analyze, and describe the characteristics or behavior of malicious code and to provide controls against such code for which signatures do not yet exist or for which existing signatures may not be effective. Malicious code for which active signatures do not yet exist or may be ineffective includes polymorphic malicious code (i.e., code that changes signatures when it replicates). Nonsignature-based mechanisms also include reputation-based technologies. In addition to the above technologies, pervasive configuration management, comprehensive software integrity controls, and anti-exploitation software may be effective in preventing the execution of unauthorized code. Malicious code may be present in commercial off-the-shelf software as well as custom-built software and could include logic bombs, backdoors, and other types of attacks that could affect organizational mission and business functions.
In situations where malicious code cannot be detected by detection methods or technologies, organizations rely on other types of controls, including secure coding practices, configuration management and control, trusted procurement processes, and monitoring practices to ensure that software does not perform functions other than the functions intended. Organizations may determine that, in response to the detection of malicious code, different actions may be warranted. For example, organizations can define actions in response to malicious code detection during periodic scans, the detection of malicious downloads, or the detection of maliciousness when attempting to open or execute files. |
|
19 |
NIST_SP_800-53_R5 |
AC-2(12) |
NIST_SP_800-53_R5_AC-2(12) |
NIST SP 800-53 Rev. 5 AC-2 (12) |
Access Control |
Account Monitoring for Atypical Usage |
Shared |
n/a |
(a) Monitor system accounts for [Assignment: organization-defined atypical usage]; and
(b) Report atypical usage of system accounts to [Assignment: organization-defined personnel or roles]. |
link |
13 |
NIST_SP_800-53_R5 |
AU-12 |
NIST_SP_800-53_R5_AU-12 |
NIST SP 800-53 Rev. 5 AU-12 |
Audit and Accountability |
Audit Record Generation |
Shared |
n/a |
a. Provide audit record generation capability for the event types the system is capable of auditing as defined in AU-2a on [Assignment: organization-defined system components];
b. Allow [Assignment: organization-defined personnel or roles] to select the event types that are to be logged by specific components of the system; and
c. Generate audit records for the event types defined in AU-2c that include the audit record content defined in AU-3. |
link |
34 |
NIST_SP_800-53_R5 |
AU-12(1) |
NIST_SP_800-53_R5_AU-12(1) |
NIST SP 800-53 Rev. 5 AU-12 (1) |
Audit and Accountability |
System-wide and Time-correlated Audit Trail |
Shared |
n/a |
Compile audit records from [Assignment: organization-defined system components] into a system-wide (logical or physical) audit trail that is time-correlated to within [Assignment: organization-defined level of tolerance for the relationship between time stamps of individual records in the audit trail]. |
link |
31 |
NIST_SP_800-53_R5 |
AU-6 |
NIST_SP_800-53_R5_AU-6 |
NIST SP 800-53 Rev. 5 AU-6 |
Audit and Accountability |
Audit Record Review, Analysis, and Reporting |
Shared |
n/a |
a. Review and analyze system audit records [Assignment: organization-defined frequency] for indications of [Assignment: organization-defined inappropriate or unusual activity] and the potential impact of the inappropriate or unusual activity;
b. Report findings to [Assignment: organization-defined personnel or roles]; and
c. Adjust the level of audit record review, analysis, and reporting within the system when there is a change in risk based on law enforcement information, intelligence information, or other credible sources of information. |
link |
25 |
NIST_SP_800-53_R5 |
AU-6(4) |
NIST_SP_800-53_R5_AU-6(4) |
NIST SP 800-53 Rev. 5 AU-6 (4) |
Audit and Accountability |
Central Review and Analysis |
Shared |
n/a |
Provide and implement the capability to centrally review and analyze audit records from multiple components within the system. |
link |
30 |
NIST_SP_800-53_R5 |
AU-6(5) |
NIST_SP_800-53_R5_AU-6(5) |
NIST SP 800-53 Rev. 5 AU-6 (5) |
Audit and Accountability |
Integrated Analysis of Audit Records |
Shared |
n/a |
Integrate analysis of audit records with analysis of [Selection (OneOrMore): vulnerability scanning information; performance data; system monitoring information; [Assignment: organization-defined data/information collected from other sources]] to further enhance the ability to identify inappropriate or unusual activity. |
link |
31 |
NIST_SP_800-53_R5 |
IR-4 |
NIST_SP_800-53_R5_IR-4 |
NIST SP 800-53 Rev. 5 IR-4 |
Incident Response |
Incident Handling |
Shared |
n/a |
a. Implement an incident handling capability for incidents that is consistent with the incident response plan and includes preparation, detection and analysis, containment, eradication, and recovery;
b. Coordinate incident handling activities with contingency planning activities;
c. Incorporate lessons learned from ongoing incident handling activities into incident response procedures, training, and testing, and implement the resulting changes accordingly; and
d. Ensure the rigor, intensity, scope, and results of incident handling activities are comparable and predictable across the organization. |
link |
24 |
NIST_SP_800-53_R5 |
IR-5 |
NIST_SP_800-53_R5_IR-5 |
NIST SP 800-53 Rev. 5 IR-5 |
Incident Response |
Incident Monitoring |
Shared |
n/a |
Track and document incidents. |
link |
13 |
NIST_SP_800-53_R5 |
RA-5 |
NIST_SP_800-53_R5_RA-5 |
NIST SP 800-53 Rev. 5 RA-5 |
Risk Assessment |
Vulnerability Monitoring and Scanning |
Shared |
n/a |
a. Monitor and scan for vulnerabilities in the system and hosted applications [Assignment: organization-defined frequency and/or randomly in accordance with organization-defined process] and when new vulnerabilities potentially affecting the system are identified and reported;
b. Employ vulnerability monitoring tools and techniques that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for:
1. Enumerating platforms, software flaws, and improper configurations;
2. Formatting checklists and test procedures; and
3. Measuring vulnerability impact;
c. Analyze vulnerability scan reports and results from vulnerability monitoring;
d. Remediate legitimate vulnerabilities [Assignment: organization-defined response times] in accordance with an organizational assessment of risk;
e. Share information obtained from the vulnerability monitoring process and control assessments with [Assignment: organization-defined personnel or roles] to help eliminate similar vulnerabilities in other systems; and
f. Employ vulnerability monitoring tools that include the capability to readily update the vulnerabilities to be scanned. |
link |
18 |
NIST_SP_800-53_R5 |
SI-2 |
NIST_SP_800-53_R5_SI-2 |
NIST SP 800-53 Rev. 5 SI-2 |
System and Information Integrity |
Flaw Remediation |
Shared |
n/a |
a. Identify, report, and correct system flaws;
b. Test software and firmware updates related to flaw remediation for effectiveness and potential side effects before installation;
c. Install security-relevant software and firmware updates within [Assignment: organization-defined time period] of the release of the updates; and
d. Incorporate flaw remediation into the organizational configuration management process. |
link |
15 |
NIST_SP_800-53_R5 |
SI-4 |
NIST_SP_800-53_R5_SI-4 |
NIST SP 800-53 Rev. 5 SI-4 |
System and Information Integrity |
System Monitoring |
Shared |
n/a |
a. Monitor the system to detect:
1. Attacks and indicators of potential attacks in accordance with the following monitoring objectives: [Assignment: organization-defined monitoring objectives]; and
2. Unauthorized local, network, and remote connections;
b. Identify unauthorized use of the system through the following techniques and methods: [Assignment: organization-defined techniques and methods];
c. Invoke internal monitoring capabilities or deploy monitoring devices:
1. Strategically within the system to collect organization-determined essential information; and
2. At ad hoc locations within the system to track specific types of transactions of interest to the organization;
d. Analyze detected events and anomalies;
e. Adjust the level of system monitoring activity when there is a change in risk to organizational operations and assets, individuals, other organizations, or the Nation;
f. Obtain legal opinion regarding system monitoring activities; and
g. Provide [Assignment: organization-defined system monitoring information] to [Assignment: organization-defined personnel or roles] [Selection (OneOrMore): as needed; [Assignment: organization-defined frequency]]. |
link |
22 |
NL_BIO_Cloud_Theme |
C.04.3(2) |
NL_BIO_Cloud_Theme_C.04.3(2) |
NL_BIO_Cloud_Theme_C.04.3(2) |
C.04 Technical Vulnerability Management |
Technical vulnerabilities |
|
n/a |
Malware protection is carried out in various environments, such as on mail servers, (desktop) computers, and when accessing the organization's network. The scan for malware includes: all files received over networks or through any form of storage medium, even before use; all attachments and downloads, even before use; virtual machines; and network traffic. |
|
20 |
NL_BIO_Cloud_Theme |
C.04.6(2) |
NL_BIO_Cloud_Theme_C.04.6(2) |
NL_BIO_Cloud_Theme_C.04.6(2) |
C.04 Technical Vulnerability Management |
Technical vulnerabilities |
|
n/a |
Technical weaknesses can be remedied by performing patch management in a timely manner, which includes: identifying, registering and acquiring patches; the decision-making around the use of patches; testing patches; performing patches; registering implemented patches. |
|
20 |
NL_BIO_Cloud_Theme |
C.04.7(2) |
NL_BIO_Cloud_Theme_C.04.7(2) |
NL_BIO_Cloud_Theme_C.04.7(2) |
C.04 Technical Vulnerability Management |
Evaluated |
|
n/a |
Evaluations of technical vulnerabilities are recorded and reported. |
|
41 |
NL_BIO_Cloud_Theme |
U.09.3(2) |
NL_BIO_Cloud_Theme_U.09.3(2) |
NL_BIO_Cloud_Theme_U.09.3(2) |
U.09 Malware Protection |
Detection, prevention and recovery |
|
n/a |
Malware protection is carried out in various environments, such as on mail servers, (desktop) computers, and when accessing the organization's network. The scan for malware includes: all files received over networks or through any form of storage medium, even before use; all attachments and downloads, even before use; virtual machines; and network traffic. |
|
25 |
NL_BIO_Cloud_Theme |
U.15.1(2) |
NL_BIO_Cloud_Theme_U.15.1(2) |
NL_BIO_Cloud_Theme_U.15.1(2) |
U.15 Logging and monitoring |
Events Logged |
|
n/a |
Malware protection is carried out in various environments, such as on mail servers, (desktop) computers, and when accessing the organization's network. The scan for malware includes: all files received over networks or through any form of storage medium, even before use; all attachments and downloads, even before use; virtual machines; and network traffic. |
|
46 |
NZ_ISM_v3.5 |
ISI-2 |
NZ_ISM_v3.5_ISI-2 |
NZISM Security Benchmark ISI-2 |
Information Security Incidents |
7.1.7 Preventing and detecting information security incidents |
Customer |
n/a |
Processes and procedures for the detection of information security incidents will assist in mitigating attacks using the most common vectors in systems exploits. Automated tools are only as good as their implementation and the level of analysis they perform. If tools are not configured to assess all areas of potential security risk then some vulnerabilities or attacks will not be detected. In addition, if tools are not regularly updated, including updates for new vulnerabilities and attack methods, their effectiveness will be reduced. |
link |
11 |
NZISM_Security_Benchmark_v1.1 |
SS-3 |
NZISM_Security_Benchmark_v1.1_SS-3 |
NZISM Security Benchmark SS-3 |
Software security |
14.1.9 Maintaining hardened SOEs |
Customer |
Agencies SHOULD ensure that for all servers and workstations:
malware detection heuristics are set to a high level;
malware pattern signatures are checked for updates on at least a daily basis;
malware pattern signatures are updated as soon as possible after vendors make them available;
all disks and systems are regularly scanned for malicious code; and
the use of End Point Agents is considered. |
Whilst a SOE can be sufficiently hardened when it is deployed, its security will progressively degrade over time. Agencies can address the degradation of the security of a SOE by ensuring that patches are continually applied, system users are not able to disable or bypass security functionality and antivirus and other security software is appropriately maintained with the latest signatures and updates.
End Point Agents monitor traffic and apply security policies on applications, storage interfaces and data in real-time. Administrators actively block or monitor and log policy breaches. The End Point Agent can also create forensic monitoring to facilitate incident investigation.
End Point Agents can monitor user activity, such as the cut, copy, paste, print, print screen operations and copying data to external drives and other devices. The Agent can then apply policies to limit such activity. |
link |
11 |
NZISM_v3.7 |
14.2.4.C.01. |
NZISM_v3.7_14.2.4.C.01. |
NZISM v3.7 14.2.4.C.01. |
Application Allow listing |
14.2.4.C.01. - To mitigate security risks, and ensure compliance with security policies and standards. |
Shared |
n/a |
Agencies SHOULD implement application allow listing as part of the SOE for workstations, servers and any other network device. |
|
25 |
NZISM_v3.7 |
14.2.5.C.01. |
NZISM_v3.7_14.2.5.C.01. |
NZISM v3.7 14.2.5.C.01. |
Application Allow listing |
14.2.5.C.01. - To mitigate security risks, and ensure compliance with security policies and standards. |
Shared |
n/a |
Agencies MUST ensure that a system user cannot disable the application allow listing mechanism. |
|
16 |
NZISM_v3.7 |
14.2.5.C.02. |
NZISM_v3.7_14.2.5.C.02. |
NZISM v3.7 14.2.5.C.02. |
Application Allow listing |
14.2.5.C.02. - To mitigate security risks, and ensure compliance with security policies and standards. |
Shared |
n/a |
Agencies SHOULD prevent a system user from running arbitrary executables. |
|
16 |
NZISM_v3.7 |
14.2.5.C.03. |
NZISM_v3.7_14.2.5.C.03. |
NZISM v3.7 14.2.5.C.03. |
Application Allow listing |
14.2.5.C.03. - To mitigate security risks, and ensure compliance with security policies and standards. |
Shared |
n/a |
Agencies SHOULD restrict a system user's rights in order to permit them to only execute a specific set of predefined executables as required for them to complete their duties. |
|
16 |
NZISM_v3.7 |
14.2.5.C.04. |
NZISM_v3.7_14.2.5.C.04. |
NZISM v3.7 14.2.5.C.04. |
Application Allow listing |
14.2.5.C.04. - To mitigate security risks, and ensure compliance with security policies and standards. |
Shared |
n/a |
Agencies SHOULD ensure that application allow listing does not replace the antivirus and anti-malware software within a system. |
|
16 |
NZISM_v3.7 |
14.2.6.C.01. |
NZISM_v3.7_14.2.6.C.01. |
NZISM v3.7 14.2.6.C.01. |
Application Allow listing |
14.2.6.C.01. - To mitigate security risks, and ensure compliance with security policies and standards. |
Shared |
n/a |
Agencies SHOULD ensure that system administrators are not automatically exempt from application allow list policy. |
|
16 |
NZISM_v3.7 |
14.2.7.C.01. |
NZISM_v3.7_14.2.7.C.01. |
NZISM v3.7 14.2.7.C.01. |
Application Allow listing |
14.2.7.C.01. - To mitigate security risks, and ensure compliance with security policies and standards. |
Shared |
n/a |
Agencies SHOULD ensure that the default policy is to deny the execution of software. |
|
16 |
NZISM_v3.7 |
14.2.7.C.02. |
NZISM_v3.7_14.2.7.C.02. |
NZISM v3.7 14.2.7.C.02. |
Application Allow listing |
14.2.7.C.02. - To mitigate security risks, and ensure compliance with security policies and standards. |
Shared |
n/a |
Agencies SHOULD ensure that application allow listing is used in addition to a strong access control list model and the use of limited privilege accounts. |
|
16 |
NZISM_v3.7 |
14.3.10.C.01. |
NZISM_v3.7_14.3.10.C.01. |
NZISM v3.7 14.3.10.C.01. |
Web Applications |
14.3.10.C.01. - To maintain control over network traffic and reduce the likelihood of exposure to malicious content or activities. |
Shared |
n/a |
Agencies SHOULD implement allow listing for all HTTP traffic being communicated through their gateways. |
|
24 |
NZISM_v3.7 |
14.3.10.C.02. |
NZISM_v3.7_14.3.10.C.02. |
NZISM v3.7 14.3.10.C.02. |
Web Applications |
14.3.10.C.02. - To maintain control over network traffic and reduce the likelihood of exposure to malicious content or activities. |
Shared |
n/a |
Agencies using an allow list on their gateways to specify the external addresses, to which encrypted connections are permitted, SHOULD specify allow list addresses by domain name or IP address. |
|
23 |
NZISM_v3.7 |
14.3.10.C.03. |
NZISM_v3.7_14.3.10.C.03. |
NZISM v3.7 14.3.10.C.03. |
Web Applications |
14.3.10.C.03. - To maintain control over network traffic and reduce the likelihood of exposure to malicious content or activities. |
Shared |
n/a |
If agencies do not allow list websites they SHOULD deny list websites to prevent access to known malicious websites. |
|
22 |
NZISM_v3.7 |
14.3.10.C.04. |
NZISM_v3.7_14.3.10.C.04. |
NZISM v3.7 14.3.10.C.04. |
Web Applications |
14.3.10.C.04. - To maintain control over network traffic and reduce the likelihood of exposure to malicious content or activities. |
Shared |
n/a |
Agencies deny listing websites SHOULD update the deny list on a frequent basis to ensure that it remains effective. |
|
22 |
NZISM_v3.7 |
17.8.10.C.02. |
NZISM_v3.7_17.8.10.C.02. |
NZISM v3.7 17.8.10.C.02. |
Internet Protocol Security (IPSec) |
17.8.10.C.02. - To enhance overall cybersecurity posture. |
Shared |
n/a |
Agencies choosing to use transport mode SHOULD additionally use an IP tunnel for IPSec connections. |
|
35 |
NZISM_v3.7 |
18.4.10.C.01. |
NZISM_v3.7_18.4.10.C.01. |
NZISM v3.7 18.4.10.C.01. |
Intrusion Detection and Prevention |
18.4.10.C.01. - To ensure user awareness of the policies, and handling outbreaks according to established procedures. |
Shared |
n/a |
Agencies MUST:
1. develop and maintain a set of policies and procedures covering how to:
a. minimise the likelihood of malicious code being introduced into a system;
b. prevent all unauthorised code from executing on an agency network;
c. detect any malicious code installed on a system;
d. make their system users aware of the agency's policies and procedures; and
e. ensure that all instances of detected malicious code outbreaks are handled according to established procedures. |
|
16 |
NZISM_v3.7 |
19.1.10.C.01. |
NZISM_v3.7_19.1.10.C.01. |
NZISM v3.7 19.1.10.C.01. |
Gateways |
19.1.10.C.01. - To ensure that the security requirements are consistently upheld throughout the network hierarchy, from the lowest to the highest networks. |
Shared |
n/a |
When agencies have cascaded connections between networks involving multiple gateways they MUST ensure that the assurance levels specified for network devices between the overall lowest and highest networks are met by the gateway between the highest network and the next highest network within the cascaded connection. |
|
50 |
NZISM_v3.7 |
6.1.9.C.01. |
NZISM_v3.7_6.1.9.C.01. |
NZISM v3.7 6.1.9.C.01. |
Information Security Reviews |
6.1.9.C.01. - To ensure alignment with the vulnerability disclosure policy, and implement adjustments and changes consistent with the findings of vulnerability analysis |
Shared |
n/a |
Agencies SHOULD review the components detailed below. Agencies SHOULD also ensure that any adjustments and changes as a result of any vulnerability analysis are consistent with the vulnerability disclosure policy.
1. Information security documentation - The SecPol, Systems Architecture, SRMPs, SSPs, SitePlan, SOPs, the VDP, the IRP, and any third party assurance reports.
2. Dispensations - Prior to the identified expiry date.
3. Operating environment - When an identified threat emerges or changes, an agency gains or loses a function or the operation of functions are moved to a new physical environment.
4. Procedures - After an information security incident or test exercise.
5. System security - Items that could affect the security of the system on a regular basis.
6. Threats - Changes in threat environment and risk profile.
7. NZISM - Changes to baseline or other controls, any new controls and guidance. |
|
16 |
|
op.exp.6 Protection against harmful code |
op.exp.6 Protection against harmful code |
404 not found |
|
|
|
n/a |
n/a |
|
61 |
PCI_DSS_v4.0.1 |
1.4.4 |
PCI_DSS_v4.0.1_1.4.4 |
PCI DSS v4.0.1 1.4.4 |
Install and Maintain Network Security Controls |
System components that store cardholder data are not directly accessible from untrusted networks |
Shared |
n/a |
Examine the data-flow diagram and network diagram to verify that it is documented that system components storing cardholder data are not directly accessible from the untrusted networks. Examine configurations of NSCs to verify that controls are implemented such that system components storing cardholder data are not directly accessible from untrusted networks |
|
43 |
PCI_DSS_v4.0.1 |
12.4.1 |
PCI_DSS_v4.0.1_12.4.1 |
PCI DSS v4.0.1 12.4.1 |
Support Information Security with Organizational Policies and Programs |
Executive Management Responsibility for PCI DSS |
Shared |
n/a |
Additional requirement for service providers only: Responsibility is established by executive management for the protection of cardholder data and a PCI DSS compliance program to include:
• Overall accountability for maintaining PCI DSS compliance.
• Defining a charter for a PCI DSS compliance program and communication to executive management. |
|
17 |
PCI_DSS_v4.0.1 |
5.2.1 |
PCI_DSS_v4.0.1_5.2.1 |
PCI DSS v4.0.1 5.2.1 |
Protect All Systems and Networks from Malicious Software |
An anti-malware solution(s) is deployed on all system components, except for those system components identified in periodic evaluations per Requirement 5.2.3 that concludes the system components are not at risk from malware |
Shared |
n/a |
Examine system components to verify that an anti-malware solution(s) is deployed on all system components, except for those determined to not be at risk from malware based on periodic evaluations per Requirement 5.2.3. For any system components without an anti-malware solution, examine the periodic evaluations to verify the component was evaluated and the evaluation concludes that the component is not at risk from malware |
|
19 |
PCI_DSS_v4.0.1 |
5.2.2 |
PCI_DSS_v4.0.1_5.2.2 |
PCI DSS v4.0.1 5.2.2 |
Protect All Systems and Networks from Malicious Software |
The deployed anti-malware solution(s) detects all known types of malware and removes, blocks, or contains all known types of malware |
Shared |
n/a |
Examine vendor documentation and configurations of the anti-malware solution(s) to verify that the solution detects all known types of malware and removes, blocks, or contains all known types of malware |
|
19 |
PCI_DSS_v4.0.1 |
5.2.3 |
PCI_DSS_v4.0.1_5.2.3 |
PCI DSS v4.0.1 5.2.3 |
Protect All Systems and Networks from Malicious Software |
Any system components that are not at risk for malware are evaluated periodically to include the following: a documented list of all system components not at risk for malware, identification and evaluation of evolving malware threats for those system components, confirmation whether such system components continue to not require anti-malware protection |
Shared |
n/a |
Examine documented policies and procedures to verify that a process is defined for periodic evaluations of any system components that are not at risk for malware that includes all elements specified in this requirement. Interview personnel to verify that the evaluations include all elements specified in this requirement. Examine the list of system components identified as not at risk of malware and compare to the system components without an anti-malware solution deployed per Requirement 5.2.1 to verify that the system components match for both requirements |
|
19 |
PCI_DSS_v4.0.1 |
5.3.1 |
PCI_DSS_v4.0.1_5.3.1 |
PCI DSS v4.0.1 5.3.1 |
Protect All Systems and Networks from Malicious Software |
The anti-malware solution(s) is kept current via automatic updates |
Shared |
n/a |
Examine anti-malware solution(s) configurations, including any master installation of the software, to verify the solution is configured to perform automatic updates. Examine system components and logs, to verify that the anti-malware solution(s) and definitions are current and have been promptly deployed |
|
19 |
PCI_DSS_v4.0.1 |
5.3.2 |
PCI_DSS_v4.0.1_5.3.2 |
PCI DSS v4.0.1 5.3.2 |
Protect All Systems and Networks from Malicious Software |
The anti-malware solution(s) performs periodic scans and active or real-time scans, or performs continuous behavioral analysis of systems or processes |
Shared |
n/a |
Examine anti-malware solution(s) configurations, including any master installation of the software, to verify the solution(s) is configured to perform at least one of the elements specified in this requirement. Examine system components, including all operating system types identified as at risk for malware, to verify the solution(s) is enabled in accordance with at least one of the elements specified in this requirement. Examine logs and scan results to verify that the solution(s) is enabled in accordance with at least one of the elements specified in this requirement |
|
19 |
PCI_DSS_v4.0.1 |
5.3.3 |
PCI_DSS_v4.0.1_5.3.3 |
PCI DSS v4.0.1 5.3.3 |
Protect All Systems and Networks from Malicious Software |
For removable electronic media, the anti-malware solution(s) performs automatic scans of when the media is inserted, connected, or logically mounted, or performs continuous behavioral analysis of systems or processes when the media is inserted, connected, or logically mounted |
Shared |
n/a |
Examine anti-malware solution(s) configurations to verify that, for removable electronic media, the solution is configured to perform at least one of the elements specified in this requirement. Examine system components with removable electronic media connected to verify that the solution(s) is enabled in accordance with at least one of the elements as specified in this requirement. Examine logs and scan results to verify that the solution(s) is enabled in accordance with at least one of the elements specified in this requirement |
|
19 |
RBI_CSF_Banks_v2016 |
13.2 |
RBI_CSF_Banks_v2016_13.2 |
|
Advanced Real-Time Threat Defence and Management |
Advanced Real-Time Threat Defence and Management-13.2 |
|
n/a |
Implement anti-malware and antivirus protection, including behavioural detection systems, for all categories of devices (endpoints such as PCs/laptops/mobile devices, etc.), servers (operating systems, databases, applications, etc.), web/internet gateways, email gateways, wireless networks, SMS servers, etc., including tools and processes for centralised management and monitoring. |
|
17 |
RBI_CSF_Banks_v2016 |
21.1 |
RBI_CSF_Banks_v2016_21.1 |
|
Metrics |
Metrics-21.1 |
|
n/a |
Develop a comprehensive set of metrics that provide for prospective and
retrospective measures, like key performance indicators and key risk indicators |
|
15 |
RBI_CSF_Banks_v2016 |
4.9 |
RBI_CSF_Banks_v2016_4.9 |
|
Network Management And Security |
Security Operation Centre-4.9 |
|
n/a |
Establish a Security Operations Centre to monitor the logs of various network activities, with the capability to escalate any abnormal or undesirable activities. |
|
15 |
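At its simplest, the Security Operations Centre workflow described in RBI CSF 4.9 above reduces to reading log events, matching them against patterns deemed abnormal, and escalating hits. A minimal Python sketch under those assumptions; the log path, the patterns, and the escalation mechanism (a print placeholder) are all illustrative and not tied to any specific product:

```python
# Sketch: scan a log file for abnormal/undesirable activity and escalate matches.
# The log path, the regex patterns, and the escalation mechanism are illustrative
# assumptions, not a specific SIEM or SOC product's interface.
import re

ABNORMAL_PATTERNS = [
    re.compile(r"authentication failure", re.IGNORECASE),
    re.compile(r"privilege escalation", re.IGNORECASE),
    re.compile(r"connection from blacklisted", re.IGNORECASE),
]

def escalate(line_no: int, line: str) -> None:
    """Placeholder escalation: in practice, raise a ticket or page the on-call analyst."""
    print(f"ESCALATE line {line_no}: {line.strip()}")

with open("network_activity.log", encoding="utf-8", errors="replace") as log:
    for line_no, line in enumerate(log, start=1):
        if any(pattern.search(line) for pattern in ABNORMAL_PATTERNS):
            escalate(line_no, line)
```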
RBI_CSF_Banks_v2016 |
7.6 |
RBI_CSF_Banks_v2016_7.6 |
|
Patch/Vulnerability & Change Management |
Patch/Vulnerability & Change Management-7.6 |
|
n/a |
As a threat mitigation strategy, identify the root cause of the incident and apply the necessary patches to plug the vulnerabilities. |
|
13 |
RBI_ITF_NBFC_v2017 |
3.1.f |
RBI_ITF_NBFC_v2017_3.1.f |
RBI IT Framework 3.1.f |
Information and Cyber Security |
Maker-checker-3.1 |
|
n/a |
The IS Policy must provide for an IS framework with the following basic tenets:
Maker-checker is one of the important principles of authorization in the information systems of financial entities. For each transaction, at least two individuals must be involved in its completion, as this reduces the risk of error and ensures the reliability of information. |
link |
20 |
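The maker-checker principle in RBI ITF 3.1.f above is easy to state in code: the individual who initiates (makes) a transaction must not be the one who approves (checks) it. A minimal Python sketch of that authorization rule; the Transaction model and the user identifiers are invented purely for illustration:

```python
# Sketch: enforce the maker-checker principle - at least two distinct
# individuals must be involved before a transaction completes.
# The Transaction model and user identifiers are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    tx_id: str
    maker: str                       # user who initiated the transaction
    checker: Optional[str] = None    # user who approved it, if any
    completed: bool = False

    def approve(self, checker: str) -> None:
        if checker == self.maker:
            raise PermissionError(
                f"{self.tx_id}: maker and checker must be different individuals"
            )
        self.checker = checker
        self.completed = True

tx = Transaction(tx_id="TX-1001", maker="alice")
tx.approve("bob")          # OK: two distinct individuals involved
print(tx)

try:
    Transaction(tx_id="TX-1002", maker="carol").approve("carol")
except PermissionError as err:
    print("rejected:", err)
```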
RMiT_v1.0 |
10.19 |
RMiT_v1.0_10.19 |
RMiT 10.19 |
Cryptography |
Cryptography - 10.19 |
Shared |
n/a |
A financial institution must ensure cryptographic controls are based on the effective implementation of suitable cryptographic protocols. The protocols shall include secret and public cryptographic key protocols, both of which shall reflect a high degree of protection to the applicable secret or private cryptographic keys. The selection of such protocols must be based on recognised international standards and tested accordingly. Commensurate with the level of risk, secret cryptographic key and private-cryptographic key storage and encryption/decryption computation must be undertaken in a protected environment, supported by a hardware security module (HSM) or trusted execution environment (TEE). |
link |
6 |
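In an Azure Key Vault context, which is the subject of this policy definition, one way to keep private-key storage and encryption/decryption computation inside an HSM-backed boundary, as RMiT 10.19 above requires, is to create keys as hardware-protected (this requires a Premium-tier vault or Managed HSM). A hedged sketch using the azure-identity and azure-keyvault-keys Python packages; the vault URL and key name are placeholders:

```python
# Sketch: create an HSM-protected RSA key in Azure Key Vault so that private-key
# material stays inside the HSM boundary (an RMiT 10.19-style control).
# Assumes the azure-identity and azure-keyvault-keys packages and a Premium-tier
# vault; the vault URL and key name are illustrative placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://<your-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# hardware_protected=True requests an RSA-HSM key rather than a software-protected key.
key = client.create_rsa_key("payments-signing-key", size=3072, hardware_protected=True)
print(key.name, key.key_type)  # key_type is expected to report an HSM-backed key type
```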
Sarbanes_Oxley_Act_(1)_2022_1 |
Sarbanes_Oxley_Act_(1)_2022_1 |
Sarbanes_Oxley_Act_(1)_2022_1 |
Sarbanes Oxley Act 2022 1 |
PUBLIC LAW |
Sarbanes Oxley Act 2022 (SOX) |
Shared |
n/a |
n/a |
|
92 |
SOC_2 |
CC7.2 |
SOC_2_CC7.2 |
SOC 2 Type 2 CC7.2 |
System Operations |
Monitor system components for anomalous behavior |
Shared |
The customer is responsible for implementing this recommendation. |
• Implements Detection Policies, Procedures, and Tools — Detection policies and procedures are defined and implemented, and detection tools are implemented on infrastructure and software to identify anomalies in the operation of, or unusual activity on, systems. Procedures may include (1) a defined governance process for security event detection and management that includes provision of resources; (2) use of intelligence sources to identify newly discovered threats and vulnerabilities; and (3) logging of unusual system activities.
• Designs Detection Measures — Detection measures are designed to identify anomalies that could result from actual or attempted (1) compromise of physical barriers; (2) unauthorized actions of authorized personnel; (3) use of compromised identification and authentication credentials; (4) unauthorized access from outside the system boundaries; (5) compromise of authorized external parties; and (6) implementation or connection of unauthorized hardware and software.
• Implements Filters to Analyze Anomalies — Management has implemented procedures to filter, summarize, and analyze anomalies to identify security events.
• Monitors Detection Tools for Effective Operation — Management has implemented processes to monitor the effectiveness of detection tools. |
|
20 |
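The "filter, summarize, and analyze anomalies" point of focus in SOC 2 CC7.2 above can be illustrated with a small aggregation step: collapse raw anomaly records into per-source counts and promote only those exceeding a threshold to candidate security events. A Python sketch under those assumptions; the record shape and the promotion threshold are illustrative:

```python
# Sketch: filter and summarize raw anomaly records into candidate security events
# (SOC 2 CC7.2 "Implements Filters to Analyze Anomalies" point of focus).
# The record shape and the promotion threshold are illustrative assumptions.
from collections import Counter

raw_anomalies = [
    {"source": "10.0.0.5", "kind": "failed_login"},
    {"source": "10.0.0.5", "kind": "failed_login"},
    {"source": "10.0.0.5", "kind": "failed_login"},
    {"source": "10.0.0.9", "kind": "port_scan"},
]

THRESHOLD = 3  # promote a (source, kind) pair to a security event at this count

summary = Counter((a["source"], a["kind"]) for a in raw_anomalies)
security_events = [
    {"source": src, "kind": kind, "count": count}
    for (src, kind), count in summary.items()
    if count >= THRESHOLD
]

for event in security_events:
    print(f"SECURITY EVENT: {event['kind']} from {event['source']} x{event['count']}")
```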
SOC_2023 |
CC2.3 |
SOC_2023_CC2.3 |
SOC 2023 CC2.3 |
Information and Communication |
To facilitate effective internal communication. |
Shared |
n/a |
Entity to communicate with external parties regarding matters affecting the functioning of internal control. |
|
218 |
SOC_2023 |
CC5.3 |
SOC_2023_CC5.3 |
SOC 2023 CC5.3 |
Control Activities |
To maintain alignment with organizational objectives and regulatory requirements. |
Shared |
n/a |
Entity deploys control activities through policies that establish what is expected and through procedures that put policies into action: establishing policies and procedures to support deployment of management's directives, assigning responsibility and accountability for executing policies and procedures, performing tasks in a timely manner, taking corrective action, performing tasks using competent personnel, and reassessing policies and procedures. |
|
229 |
SOC_2023 |
CC6.1 |
SOC_2023_CC6.1 |
SOC 2023 CC6.1 |
Logical and Physical Access Controls |
To mitigate security events and ensuring the confidentiality, integrity, and availability of critical information assets. |
Shared |
n/a |
Entity implements logical access security software, infrastructure, and architectures over protected information assets to protect them from security events and meet the entity's objectives by identifying and managing the inventory of information assets, restricting logical access, identifying and authenticating users, considering network segmentation, managing points of access, restricting access to information assets, managing credentials for infrastructure and software, using encryption to protect data, and protecting encryption keys. |
|
128 |
SOC_2023 |
CC6.8 |
SOC_2023_CC6.8 |
SOC 2023 CC6.8 |
Logical and Physical Access Controls |
To mitigate the risk of cybersecurity threats, safeguard critical systems and data, and maintain operational continuity and integrity. |
Shared |
n/a |
Entity implements controls to prevent or detect and act upon the introduction of unauthorized or malicious software to meet the entity’s objectives. |
|
33 |
SOC_2023 |
CC7.2 |
SOC_2023_CC7.2 |
SOC 2023 CC7.2 |
Systems Operations |
To maintain robust security measures and ensure operational resilience. |
Shared |
n/a |
The entity monitors system components and the operation of those components for anomalies that are indicative of malicious acts, natural disasters, and errors affecting the entity's ability to meet its objectives; anomalies are analysed to determine whether they represent security events. |
|
167 |
SOC_2023 |
CC7.4 |
SOC_2023_CC7.4 |
SOC 2023 CC7.4 |
Systems Operations |
To effectively manage security incidents, minimize their impact, and protect assets, operations, and reputation. |
Shared |
n/a |
The entity responds to identified security incidents by:
a. Executing a defined incident-response program to understand, contain, remediate, and communicate security incidents, with assigned roles and responsibilities;
b. Establishing procedures to contain security incidents;
c. Mitigating ongoing security incidents and ending the threats they pose;
d. Restoring operations;
e. Developing and implementing communication protocols for security incidents;
f. Obtaining an understanding of the nature of the incident and determining a containment strategy;
g. Remediating identified vulnerabilities;
h. Communicating remediation activities; and
i. Evaluating the effectiveness of the incident response and performing periodic incident evaluations. |
|
213 |
SOC_2023 |
CC9.2 |
SOC_2023_CC9.2 |
SOC 2023 CC9.2 |
Risk Mitigation |
To ensure effective risk management throughout the supply chain and business ecosystem. |
Shared |
n/a |
Entity assesses and manages risks associated with vendors and business partners. |
|
43 |
SWIFT_CSCF_2024 |
1.1 |
SWIFT_CSCF_2024_1.1 |
SWIFT Customer Security Controls Framework 2024 1.1 |
Physical and Environmental Security |
Swift Environment Protection |
Shared |
1. Segmentation between the user's Swift infrastructure and the larger enterprise network reduces the attack surface and has been shown to be an effective way to defend against cyber-attacks that commonly involve a compromise of the general enterprise IT environment.
2. Effective segmentation includes network-level separation, access restrictions, and connectivity restrictions. |
To ensure the protection of the user’s Swift infrastructure from potentially compromised elements of the general IT environment and external environment. |
|
69 |
SWIFT_CSCF_2024 |
1.5 |
SWIFT_CSCF_2024_1.5 |
SWIFT Customer Security Controls Framework 2024 1.5 |
Physical and Environmental Security |
Customer Environment Protection |
Shared |
1. Segmentation between the customer’s connectivity infrastructure and its larger enterprise network reduces the attack surface and has been shown to be an effective way to defend against cyber-attacks that commonly involve compromise of the general enterprise IT environment.
2. Effective segmentation will include network-level separation, access restrictions, and connectivity restrictions. |
To ensure the protection of the customer’s connectivity infrastructure from external environment and potentially compromised elements of the general IT environment. |
|
57 |
SWIFT_CSCF_2024 |
6.1 |
SWIFT_CSCF_2024_6.1 |
SWIFT Customer Security Controls Framework 2024 6.1 |
Risk Management |
Malware Protection |
Shared |
1. Malware is a general term that includes many types of intrusive and unwanted software, including viruses.
2. Anti-malware technology (a broader term for anti-virus) is effective in protecting against malicious code that has a known digital or behaviour profile. |
To ensure that the user’s Swift infrastructure is protected against malware and act upon results. |
|
19 |
SWIFT_CSCF_2024 |
8.1 |
SWIFT_CSCF_2024_8.1 |
404 not found |
|
|
|
n/a |
n/a |
|
17 |
SWIFT_CSCF_2024 |
9.1 |
SWIFT_CSCF_2024_9.1 |
404 not found |
|
|
|
n/a |
n/a |
|
57 |
SWIFT_CSCF_v2021 |
2.7 |
SWIFT_CSCF_v2021_2.7 |
SWIFT CSCF v2021 2.7 |
Reduce Attack Surface and Vulnerabilities |
Vulnerability Scanning |
|
n/a |
Identify known vulnerabilities within the local SWIFT environment by implementing a regular vulnerability scanning process and act upon results. |
link |
9 |
SWIFT_CSCF_v2021 |
6.4 |
SWIFT_CSCF_v2021_6.4 |
SWIFT CSCF v2021 6.4 |
Detect Anomalous Activity to Systems or Transaction Records |
Logging and Monitoring |
|
n/a |
Record security events and detect anomalous actions and operations within the local SWIFT environment. |
link |
32 |
SWIFT_CSCF_v2021 |
6.5A |
SWIFT_CSCF_v2021_6.5A |
SWIFT CSCF v2021 6.5A |
Detect Anomalous Activity to Systems or Transaction Records |
Intrusion Detection |
|
n/a |
Detect and prevent anomalous network activity into and within the local or remote SWIFT environment. |
link |
15 |
SWIFT_CSCF_v2022 |
2.7 |
SWIFT_CSCF_v2022_2.7 |
SWIFT CSCF v2022 2.7 |
2. Reduce Attack Surface and Vulnerabilities |
Identify known vulnerabilities within the local SWIFT environment by implementing a regular vulnerability scanning process and act upon results. |
Shared |
n/a |
Secure zone (including dedicated operator PC) systems are scanned for vulnerabilities using an up-to-date, reputable scanning tool and results are considered for appropriate resolving actions. |
link |
13 |
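The SWIFT CSCF 2.7 implementation guidance above ("results are considered for appropriate resolving actions") amounts to triaging scanner output. A minimal Python sketch that filters a scan export down to findings needing action; the JSON layout and severity scale are assumptions for illustration and not tied to any particular scanning tool:

```python
# Sketch: triage vulnerability scan results for the secure zone (SWIFT CSCF 2.7),
# keeping only findings at or above a chosen severity for resolving actions.
# The scan_results.json layout and severity scale are illustrative assumptions.
import json

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}
ACTION_THRESHOLD = "high"  # act on high and critical findings

with open("scan_results.json", encoding="utf-8") as f:
    findings = json.load(f)  # assumed: a list of {"host", "cve", "severity"} objects

actionable = [
    finding for finding in findings
    if SEVERITY_ORDER.get(finding["severity"].lower(), 0)
    >= SEVERITY_ORDER[ACTION_THRESHOLD]
]

# Print the most severe findings first so resolving actions can be prioritised.
for finding in sorted(actionable, key=lambda x: -SEVERITY_ORDER[x["severity"].lower()]):
    print(f"{finding['severity'].upper():8} {finding['host']:15} {finding['cve']}")
```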
SWIFT_CSCF_v2022 |
6.4 |
SWIFT_CSCF_v2022_6.4 |
SWIFT CSCF v2022 6.4 |
6. Detect Anomalous Activity to Systems or Transaction Records |
Record security events and detect anomalous actions and operations within the local SWIFT environment. |
Shared |
n/a |
Capabilities to detect anomalous activity are implemented, and a process or tool is in place to keep and review logs. |
link |
50 |
SWIFT_CSCF_v2022 |
6.5A |
SWIFT_CSCF_v2022_6.5A |
SWIFT CSCF v2022 6.5A |
6. Detect Anomalous Activity to Systems or Transaction Records |
Detect and contain anomalous network activity into and within the local or remote SWIFT environment. |
Shared |
n/a |
Intrusion detection is implemented to detect unauthorised network access and anomalous activity. |
link |
17 |
|
U.09.3 - Detection, prevention and recovery |
U.09.3 - Detection, prevention and recovery |
404 not found |
|
|
|
n/a |
n/a |
|
21 |
|
U.15.1 - Events logged |
U.15.1 - Events logged |
404 not found |
|
|
|
n/a |
n/a |
|
40 |