Tuesday, November 15, 2016

What Security Is Available for Tandem Guardian 90?

Background

The Tandem NonStop system, based on the Guardian 90 operating system, is a unique processing environment that differs from the IBM MVS and DEC VAX/VMS environments because of its fault-tolerant capabilities. This fault-tolerant operation is accomplished through redundant hardware, backup power supplies, alternate data paths and bus paths, alternate controllers, and mirrored disks (i.e., writing data to two separate disks).
The operating system contains additional functions which support the fault-tolerant capabilities. The operating system provides both multiprocessing (i.e., parallel processing in separate processors) and multiprogramming (i.e., interleaved processing in one processor). The ability to have multiple independent systems loosely coupled as one network eliminates the risk of a single point of failure. The operating system continuously checks the integrity of the system by ensuring that each of the defined processing modules is active. In the event that a processing module fails, the operating system notifies the system and application processes so that they can take pre-specified action, such as re-routing workloads.
The Tandem NonStop system is made up of multiple processes (i.e., programs running on the system) running in a loosely coupled environment.
The Tandem system contains multiple CPUs, each with its own private memory and multiplexed input/output channel. The processor modules communicate with one another over a pair of high-speed interprocessor buses (i.e., the Dynabus). Peripheral device controllers are connected to the input/output channels of two processor modules so that the device remains accessible even if one CPU fails. The system can be expanded by adding additional processor modules (i.e., up to 16 per system). Through the use of Tandem's Expand software, up to 255 systems can be joined into a network. In addition, a fiber-optic extension (i.e., FOX) can be used to join systems into a network.
The NonStop hardware architecture and the Guardian 90 operating system architecture prevent a single hardware or software malfunction from disrupting system operations. In this parallel processing architecture, the workload is divided among the processors, which perform multiple tasks simultaneously. This system architecture provides maximum throughput because workloads are routed to the least congested processor.
The purpose of this article is to discuss the security controls provided by the Guardian operating system and recommend control approaches.

Introduction to Tandem Security

Basic Guardian security is included with the product and provides user authentication controls for logging on to the system as well as file security. The security functions provided in Guardian work directly with the Guardian operating system, so security calls are not made to an external process. Safeguard is a layered product with a separate callable set of procedures.
The architecture of the hardware plays a significant role in how security functions in a Tandem system. Each processor has its own copy of the Security Reference Monitor (SMON), which determines when to call Safeguard or Guardian for security validation. The overall Security Reference Monitor (CSNP) runs as a fault-tolerant process pair contained in two different CPUs (i.e., primary and backup) and is responsible for coordinating updates across all systems. It also ensures that the individual SMONs are functioning.
Guardian's file specification consists of volumes, subvolumes (i.e., collections of files), and files. All programs and data files are referred to as files.
When accessing a file, the volume and subvolume associated with the file must be specified. The syntax of a file specification consists of the following:
$xxxxx.yyyyy.zzzzz
where:
xxxxx is the disk drive on which the file resides
yyyyy is the subvolume on which the file resides
zzzzz is the file name
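For example, the hypothetical specification $DATA01.PAYROLL.EMPMAST refers to the file EMPMAST in the subvolume PAYROLL on the volume $DATA01.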
Tandem provides the Tandem Advanced Command Language (TACL), which allows the user to log on to the system, use commands and utilities, and execute programs.
USERIDs are constructed to assign a user to a specific group. The USERID consists of two numeric identifiers (x,y): the first identifier assigns the user to a specific group, and the second uniquely identifies the user within that group.
Privileged user IDs are as follows:
255,255 is the Super ID
255,n is the Super Group
n,255 is assigned to a group manager
Guardian provides internal security which is set at the file level. Therefore, in order to protect all of the files in a subvolume, each file must be protected individually. Tandem's Safeguard security replaces Guardian's security when defining access entitlements. Safeguard allows for security to be set at the file, subvolume, and volume level.
When using Guardian security, file protection is assigned using the following security characters, which identify the type of user that may access the file:
"-"  Super ID only (local)
U    Owner of the file (local or remote)
C    Member of the owner's group (local or remote)
N    Any user (local or remote)
O    Owner of the file (local only)
G    Member of the owner's group (local only)
A    Any user (local only)
Local access refers to users who log on at a terminal that is directly connected to the system. Remote access refers to users who log on from a remote system. Remote systems are connected to local systems through an Expand link.
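As a brief illustration (the volume, subvolume, and file names are hypothetical), the four security characters are given in Read, Write, Execute, Purge (RWEP) order and can be applied with FUP's SECURE command:

FUP SECURE $DATA01.PAYROLL.EMPMAST, "GGGO"

In this sketch, members of the owner's group may read, write, and execute the file locally, but only the owner may purge it.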
Within a Safeguard Security environment, access can be set on an individual user level or for all users within a group using Access Control Lists (ACLs).
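A minimal Safeguard sketch, with hypothetical file and user names (the exact SAFECOM syntax should be confirmed against the Safeguard manuals for your release), might look like this:

SAFECOM
=ADD DISKFILE $DATA01.PAYROLL.EMPMAST, ACCESS PAY.CLERK1 (R,W), ACCESS PAY.* R
=INFO DISKFILE $DATA01.PAYROLL.EMPMAST

Here the individual user PAY.CLERK1 is granted read and write access, while the other members of the PAY group are granted read access only.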

Password Controls

Within a Guardian security environment, the Binder utility is used to change the PASSWORD and COMIT programs to provide the following controls over passwords:
encrypt the password file
prevent passwords from being typed in clear text
enforce a minimum password length
It should be noted that these controls are only available when using TACL; they are not available through in-house written command interpreters or the COMINT command interpreter that was provided by Tandem in earlier versions of Guardian but is no longer distributed with the Guardian operating system. In addition, these changes are made by altering a program and cannot be made through a set of panels. The settings can be customized for individual users by defining a TACLCSTM file for the user, which is stored in the user's default subvolume. However, since the PASSWORD and COMIT programs must be licensed, there is no exposure of a user changing their TACLCSTM file to point to their own unauthorized version. The TACLCSTM file should still be secured from the user and other users, since it controls the manner in which the user's TACL process functions.
Terminal timeouts after a site-specified period of inactivity are defined as part of installing TACL.
Safeguard provides the same level of password control through its command interpreter, SAFECOM.
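For instance, a hedged SAFECOM sketch of global password controls (the attribute names and values are illustrative and should be verified against the Safeguard documentation for your release):

SAFECOM
=ALTER SAFEGUARD, PASSWORD-REQUIRED ON, PASSWORD-MINIMUM-LENGTH 8
=INFO SAFEGUARD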

Privileged Programs

Utilities and programs provided by Tandem will not allow a user to update a file to which the user has not been granted access. However, Tandem provides the ability to attach to a program an authority level under which the program will run.
Therefore, a user that normally does not have access to a file can be assigned execute access to a program which is allowed to update a specific file.
This process of performing access validation based on the authority assigned to the program instead of the authority assigned to the submitter of the program is referred to as PROG-ID. When using Guardian security, the authority of the program is based on the authority of the user who compiled the program. If the program is changed by a different user, then the authority under which the program will execute changes to that of the user who last changed the program.
When Safeguard is used, the authority of the program is based upon the owner of the program. This is referred to as the PROG-ID Accessor ID. Licensed programs are another form of privileged program, whereby the program may be coded to execute in privileged (supervisor) state. Licensed programs bypass all security checking, regardless of whether Guardian or Safeguard is used as the security system. In order to define a program as a licensed program, it must be licensed by the Super ID.
PROG-ID and Licensed programs are normally used to allow unprivileged users to perform pre-defined functions which would otherwise necessitate them to have a privileged ID.
When performing a review of the privileged programs, documentation should be available to identify the purpose of each of these programs. If a program is written to perform a privileged function, the users who have been granted execute access should be identified to ensure that only appropriate users have been provided with this ability. In addition, the PROG-ID programs should be reviewed to ensure that they are not written in a way that allows the submitter to use TEDIT, TACL, or DEBUG to break out of the program and access files that they are only authorized to reach through the pre-defined PROG-ID program.

Privileged IDs

The Super ID is the most privileged ID on the system. It has the ability to access all resources on the system which includes the ability to administer security on the system.
However, as of Safeguard version C20, Safeguard has an option to deny the Super ID's access to specific resources using Access Control Lists (ACLs).
Normally, only Group Managers (n,255) and the Super ID can add users to the system. However, in a Safeguard environment, other users can be designated to perform this function by creating an OBJECTTYPE USER authorization record and assigning them CREATE (add users) and/or OWNER (change user ID access) authority in its Access Control List (ACL). Normally, only the Super Group (255,n) can administer access entitlements to an object (e.g., file, subvolume, or disk). However, in a Safeguard environment, other users can be designated to administer access entitlements by creating an OBJECTTYPE authorization record for the object type and an ACL that specifies the type of access being granted (i.e., CREATE and/or OWNER). It should be noted that transferring security administration responsibility in this way applies to all objects within the object type. Therefore, if a user is assigned CREATE authority in the OBJECTTYPE DISKFILE record, then the user can specify access entitlements for all files that are not already protected by Safeguard.
Within a Safeguard environment, the privileged IDs that normally have the ability to create user IDs and add access entitlements can be prevented from doing so using the OBJECTTYPE OBJECTTYPE authorization record. To establish this control, an OBJECTTYPE OBJECTTYPE authorization record should be created that grants the security administrators CREATE authority. Ownership of the authorization record should be assigned to the security administrator ID, and the Super ID should be explicitly denied access. The =INFO OBJECTTYPE OBJECTTYPE command is executed to verify whether this control has been installed. An example of a setting that prevents the Super ID from performing security administration functions is as follows:
OBJECTTYPE OBJECTTYPE
  LAST-MODIFIED     OWNER    STATUS
  11JUL92, 11:10    100,1    THAWED
    150,1             C
    255,255           DENY C,O
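The record shown above could be established with SAFECOM commands along the following lines (the user IDs are the ones from the example; this is a sketch only, and the exact syntax should be verified against the Safeguard Reference Manual):

SAFECOM
=ADD OBJECTTYPE OBJECTTYPE, OWNER 100,1, ACCESS 150,1 C, ACCESS 255,255 DENY (C,O)
=INFO OBJECTTYPE OBJECTTYPE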
If it is decided not to remove security administration capability from the Super ID, the alternative in a Safeguard environment is to freeze the Super ID. This prevents its use until the security administrator "thaws" the ID.
In a Guardian based security environment, the Super ID is controlled by segregating the functions performed by the Super ID to other IDs. For instance, the ability to access the various resources on the system should be assigned to the various IDs or processes. In the event that specific resources have not been assigned to a particular ID, which would be the case when the Super ID is used, an alternative would be to establish emergency IDs that have access to these functions.
If access entitlements are properly set up on the system, then there is no requirement to have the Super ID resident on the system, except when a new release of Guardian is being installed.
In a Guardian security environment, the only method that can be used to restrict the Super ID's routine use is to treat it as an emergency ID. When the emergency ID is used, the password is divided into two parts that are assigned to two different individuals. CMON, a separately running process (explained in the System Access section) that is called by TACL, could then be used to identify when the Super ID is used.
The Super Group IDs (255,n) have the ability to start and stop processes and administer access entitlements to all objects. However, within a Safeguard environment, the security administrator function of the Super Group can be restricted as previously discussed.
Group Manager accounts are typically used in order to provide decentralized security administration and to allow the group manager to halt their users' processes when they are in an endless loop. However, group manager IDs are allowed to log on as any ID within their group without knowledge of the password. If Group Manager accounts are used, then CMON should be used to identify when they log on as another user.
Within a Safeguard environment, the Super ID and Group Manager IDs may be required to supply a password when they log on as a different user.

System Access

Within a Guardian security environment, an owner is associated with each file; the owner is the user who created the file. Each user is assigned their own Guardian default security vector, which provides a default set of protection for all files the user creates. This approach is the mechanism used to provide protection of all files. As mentioned previously, Guardian security is provided only at the file level and is not available for subvolumes, volumes, processes, and devices.
Safeguard provides default protection of all files, subvolumes, and volumes. In addition, Safeguard provides the ability to protect devices and processes. Users are assigned default Access Control Lists (ACLs), which provide default protection for all files created by the user. If a volume, subvolume, or file does not have an ACL specified, then no one has access.
CMON (Command Interpreter Monitor) is a process which is called by TACL when specific TACL commands are issued by a user. CMON is a program that can be customized by an installation to perform specific actions, which can provide preventive and detective security controls. The specific points in system processing at which CMON is called include:
logons and logoffs
adding and deleting users
changing user passwords
altering priorities at execution time
unauthorized logon attempts
In order for CMON to be called at the various points of system processing, it must be started as a process called $CMON. The starting of CMON can be performed automatically through Obey files, which are initiated by the Initial_Comint_Infile entry of the system configuration file. A privileged user, the Super ID, starts these Obey files since some of the processes contained in them require privileged access.
It is important that CMON is automatically started during system start-up; otherwise, anyone who has access at the system level (i.e., using a command interpreter) can start their own CMON, which may not be coded to log all activity.
Since CMON is a user-written exit, it can be coded not only to establish an audit trail that identifies sensitive activity (e.g., Super ID logon), but also to prevent certain events from occurring (e.g., prevent Super ID logon during specific time frames).
However, it is important to note that CMON is only called from TACL. Therefore, if a user starts another process (e.g., TEDIT), the activity of that process is not monitored by CMON.
Safeguard provides the ability to log security events including the accessing of resources (i.e., successful and failed attempts) and the changing of the security set-up. This information is used to determine when sensitive resources are being accessed or changed.

Software Change Control

Within a change control environment, it is assumed that there are various libraries that a change must migrate through, such as program development libraries, test libraries, and production libraries. The typical control is a segregation of duties between the person who develops the program and the person who migrates it to the test and production libraries.
This approach can be implemented within a Tandem environment by restricting libraries to the appropriate person at the subvolume level when using Safeguard. However, when using Guardian, security is only at the file level; therefore, all programs that are being changed must already be protected before they are migrated. Since one does not want security to have to be changed prior to every program migration, the alternative is to assign the person responsible for migrations a separate ID that has default protection (i.e., using the Guardian default security vector). This restricts all access except for the owner, who is the migrator. The same approach may be used in a Safeguard environment using the default Access Control List.

On-line Transaction Processing

Pathway is the transaction processing method provided by Tandem. In a typical environment, a separate Pathway process would be assigned for each application.
Safeguard may be used to restrict the users who are able to log on to Pathway and to restrict the resources that can be accessed by the applications defined under Pathway. By restricting the resources defined in a Pathway application, Safeguard ensures that access to those resources occurs only through the Pathway application. In addition, Safeguard can optionally restrict other processes (e.g., a user TACL session or another Pathway application) from linking to a Pathway process.
Safeguard does not provide the ability to restrict specific transactions (i.e., screens) or resources (i.e., data) that its users can access.
Within a Guardian security environment each Pathway application can be restricted to specific objects that they require. However, Guardian does not provide the ability to assign user IDs to run specific Pathway applications. Therefore, security must be established within the application or using a third party vendor product.

Safeguard Installation

Safeguard is a process that can be started in three different ways depending on the level of security that your installation requires.
The most secure installation approach is to SYSGEN Safeguard onto the system. This aborts the entire load of the system if Safeguard cannot be started. The second approach is to start Safeguard as part of the normal system initialization process (i.e., specify the initialization of the Safeguard process in the CIIN file). The third approach is to start Safeguard after the system is up and running.
The second and third methods allow Safeguard to be shut down dynamically while system processing continues. In that case, only the owner of a file, the owner's group manager, and the Super ID would have access. This restrictive behavior applies if the Safeguard security bit, which is contained in each file header, was set when the file was created. To ensure that this bit is set, it is suggested that each user's default Access Control List contain this specification.

Sunday, October 09, 2016

NonStop File Integrity: Check It! Protect It!

File Integrity Monitoring on NonStop


File Integrity Monitoring (FIM) is an important requirement of the PCI data security standard for maintaining confidential (e.g. cardholder) information, and is considered a crucial part of protecting business assets.

NonStop systems are now being used in far more dynamic situations and have more external connections than ever before. The ubiquity of payment cards for personal electronic transactions has changed the security equation in a fundamental way.

Any compromise in security is likely to have far reaching consequences, both for the immediate damage that may be done in terms of financial loss, and for the wider damage done to a merchant’s reputation. The security of personal cardholder information has become paramount.

In this context, FIM should be considered an important security necessity, not just for PCI systems, but for all NonStop systems.

FIM and PCI DSS


PCI DSS Requirement 11.5 stipulates that members must “Deploy a change-detection mechanism (for example file-integrity monitoring tools) to alert personnel of unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform the critical file comparisons at least weekly.”

In version 2.0 of the security standard, in a clarification to Requirement 11.5.b, it was further specified that it is an audit requirement to “Verify that tools are configured to alert personnel to unauthorized modification of critical files.”

PCI DSS Requirement 11.5 version 3.1 further clarifies that unauthorized modifications include changes, additions, and deletions of critical system files. It is clear from these excerpts that FIM is a key requirement of PCI DSS, and therefore a FIM solution must be implemented on any system that handles cardholder information.


What is FIM?



FIM includes any technology that monitors files for changes. Assuming that at least some file change is expected on a system, FIM’s primary purpose should be to identify possible “bad” changes so that they can be rolled back or remediated in some way.  A “bad change” is any change that is undesirable. This is not the same as an unplanned, unauthorized or suspect change.

An unplanned change is not necessarily a “bad” change. Most system administrators have found it necessary to intervene on occasion to remedy a problem. Their actions might include changing a configuration parameter, or perhaps changing the security of a file due to an oversight.

In both cases, the change is both unplanned and unauthorized. Regardless, the change must be appropriately recorded and reported, then reviewed and either made permanent or modified.

Of course other changes that may be unplanned and unauthorized can be part of an active security threat, in which case FIM may provide the first notice that the system has been compromised.

Accidental change represents no less of an issue and is probably the most likely source of unplanned and unauthorized change. FIM can also be used as part of a change control regime, whereby planned changes are detected and recorded as having occurred as expected.

Components of FIM

Basic FIM functionality should allow the administrator to:

1. Create and store a baseline for specified files and their attributes of interest
2. Update the baseline to take into account planned or allowable change
3. Run periodic checks and report the results
4. Store the results of each check

Conclusion


FIM is a critical requirement for security, and key to PCI DSS compliance.

However, the detection of any particular change is just the start of the process. To be effective, FIM solutions must differentiate low-risk from high-risk change; integrate with other security solutions for log and security event management (including real-time alerts) and support a fully-managed history database of changes.

CSP’s File Integrity Checker (FIC) is widely used by financial institutions to deliver FIM in NonStop Guardian and OSS environments, and is tightly integrated with CSP’s other solutions for audit, compliance and Safeguard, EMS and Base24 OMF real-time event monitoring. FIC’s new “Guardian Fileset Compare” feature permits the attributes of any two file sets on any two systems to be compared against each other.

New solutions on NonStop to protect your sensitive data - HPE SecureData Enterprise

For the past few decades, HPE Integrity NonStop systems have been the preferred mission critical computing platform in industries such as financial payments. The key reason for this is the mission critical computing features that NonStop systems offer which very few other platforms can match.

With a Massively Parallel Processing (MPP) architecture that provides
- unmatched scalability, and
- an integrated software stack that builds high availability right up to the application layer,

NonStop offers unique benefits that have made it a platform of choice for the industry.

The IT systems supporting enterprises in industries such as payments, retail and healthcare contain and manage very sensitive information such as customers’ credit card numbers and personal information such as medical history.

A database containing millions of these customer records is an attractive target for malicious hackers, who try every possible means to steal the data and monetize it in various possible (and often creative) ways. Their success comes at an enormous cost if the target organization becomes a victim of such an attack.  Such organizations end up paying dearly in terms of regulatory fines, lost business, loss of reputation, customer compensation cost and so forth.

Various industry and government regulations have been in place and new ones are in the works which aim at protecting consumers from such breaches and guiding the industry towards implementing solutions and practices which mitigate these risks.

The Payment Card Industry Data Security Standard (PCI DSS) and the General Data Protection Regulation (GDPR) are examples, but there are many others in different geographies and industries. The regulations are fairly comprehensive and cover all aspects of protecting sensitive data. It’s rather difficult to explain these regulations in a few lines, but the overall philosophy is to:

• Protect the computing environment from external and internal attacks
• Protect the data throughout its life (at creation, as it traverses from one node to another, as it’s processed, and on media - whether live or archived)
• Implement strong authentication and access control measures
• Ensure adequate logging mechanisms to enable forensic analysis in case of a breach
• Document and enforce practices and policies; educate employees

Given the importance of data protection, NonStop has been offering security solutions for many years in order to enable you to meet these stringent security requirements using standards-based cryptography.

Beginning in early 2016, the HPE NonStop Enterprise Division (NED) launched several new products with an aim to provide additional modern, data protection solutions to NonStop customers. These are essentially two product suites:

• HPE SecureData Enterprise
• HPE SecureData companion products for NonStop

HPE SecureData Enterprise

HPE SecureData Enterprise is a unique, end to end data protection platform used by enterprises in a variety of environments. This product is offered by HPE Software group’s Data Security business unit formed with the merger of the erstwhile Atalla products group and Voltage Inc., which was acquired by HPE in early 2015.

The NonStop Enterprise Division (NED) and the HPE Data Security group have teamed up to offer HPE SecureData Enterprise to NonStop customers. In traditional data protection methods, customers employ different techniques for the different environments that the data passes through.

Examples are user access control, file encryption, TLS or SSH protocols for data in transit, disk/volume encryption for secondary storage etc.

Each of these may involve separate cryptography, handshakes, key protection, etc., and the data may be in the clear between the stages. Overall, this is a piecemeal approach and not the best protection for your data.

HPE SecureData Enterprise approaches this topic from the perspective of data-centric security that comprehensively protects the sensitive data in an enterprise. Using a data-centric approach to security, the sensitive data is protected right where it is created, as it traverses through the network, while it’s processed/stored in different nodes, used in analytics and when it is archived. At all stages, the data is in encrypted or tokenized form such that, even if there is a successful breach, the data is unusable by the cyberattacker.

Figure 1 below compares the data-centric security offered by HPE SecureData to the traditional methods of data protection.


Three key technology elements are at the core of this solution:
  1. HPE Format Preserving Encryption (FPE),
  2. HPE Secure Stateless Tokenization (SST), and
  3. HPE Secure Stateless Key Management.
1) HPE FPE is a technology used to encrypt data without changing its original format. While it provides the same encryption strength as the traditional encryption technologies do, the key advantage of FPE is that, because it preserves the data format through the encryption process,

- the database which stores the data or
- the applications which process it do not need to be modified, and
-  the majority of applications and processes operate on the data in its protected form—no decryption necessary for use.

This drastically brings down the cost and complexity of transforming an existing solution from an unprotected to a protected form, and reduces the exposure of sensitive data to attack.
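As a purely illustrative example (the ciphertext value below is invented), FPE turns a 16-digit PAN into another 16-digit value, so column definitions, screen layouts, and validation routines continue to work unchanged:

Plaintext PAN : 4111 1111 1111 1111
FPE ciphertext: 8942 6753 0217 4496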

2) HPE SST is a related technology available in HPE SecureData, and is used to protect sensitive data elements in a file or a database by replacing them with tokens.

- HPE SST is recommended for use with Primary Account Numbers (PANs) used in payment cards.

In this solution, a token table consisting of a static, pre-generated table of random numbers, created using a FIPS-validated random number generator, resides on the platform.

A PAN, the data to be tokenized, uniquely maps to a token in that table but has no relationship to it. That token is stored in the system (in files and databases) in place of the PAN (plain data) which is now said to be "tokenized".

Only trusted applications are allowed to detokenize and derive the original PAN. In contrast to traditional tokenization technologies, where a separate “token vault” is maintained, the HPE SecureData solution has no token vault, and hence no cost or management complexities associated with one, and no database or vault to be targeted for cyber-attack. Because the system does not store plaintext data in any form, it is outside the scope of PCI audit, thus greatly reducing the costs of PCI compliance. Moreover, token vaults grow in size with the amount of customer data maintained in them, which adds to the management complexities and challenges in scaling and application performance. An SST-based solution, in contrast, does not have these challenges and hence is highly scalable, typically yielding a strong ROI.

3) HPE Secure Stateless Key Management simplifies the key management needs of the HPE FPE solution through another landmark innovation. It securely derives the key on-the-fly, thereby greatly reducing the cost and complexities of key management. It can authenticate key requests using industry-standard identity and access management infrastructure such as LDAP or Active Directory.

HPE SecureData Enterprise provides a comprehensive data security solution across a cross-section of computing platforms commonly used in the industry. It is supported on traditional *NIX environments, IBM mainframe, HPE NonStop, open systems, cloud, mobile and Big Data environments such as Vertica, Hadoop, and Teradata. This breadth of support enables you to standardize on a single solution to address the data protection needs across your enterprise, which may use one or more of these platforms. Apart from protecting the data, HPE SecureData gives you immense benefits in terms of managing the cost and complexity of the solution and eliminating security weaknesses in the enterprise.



Tuesday, June 14, 2016

Tandem Computers, Inc.

The Tandem computer was the granddaddy of fault-tolerant systems. Tandem’s first system was delivered in 1976. Forty years later, its original architecture remains the dominant fault-tolerant technology. Then and now, a Tandem system was a loosely coupled multiprocessor system that contained anywhere from two to sixteen independent processors in a node. Up to 255 nodes could be included in a single network, linked via Tandem’s Expand communication network.


The processors in a node were linked via a duplexed, interprocessor messaging bus called the Dynabus, capable of a 26 megabyte/second data rate.

All device controllers were dual-ported so that there was always a path to a device even if a processor failed. All critical processes ran as process pairs in two different processors. One process was the primary process, and one was the backup process. The primary process kept its backup process synchronized via checkpointing messages. Should the primary process fail (presumably due to a processor failure), the backup process took over and continued processing with no apparent interruption to the user. (Tandem’s later inclusion of the Pathway process monitor eliminated the need for application programmers to write checkpointed process pairs.)

With Tandem’s second release of its product, the Tandem NS2, each processor could be configured with two megabytes of memory. Each mirrored disk pair could provide 128 megabytes of storage (yes, that’s megabytes, not gigabytes).

Tandem was acquired by Compaq in 1997, which then was acquired by HP in 2002. Tandem computers are now known as HPE NonStop computers. (HPE is HP Enterprise, one of the two companies that resulted from the split in 2015 of Hewlett Packard into HP, Inc., which sells HP personal computers and printers, and HPE, which markets HP server, storage, and networking systems.) 

Sunday, June 12, 2016

Techniques that can be used to influence the SQL compiler's choice of execution plan

Below are techniques that can influence the SQL compiler's choice of execution plan.

  1. Performing Update Statistics
  2. Creating or Modifying Indexes
  3. Changing the Query Text
  4. Using SQL CONTROL Statements

1. Performing Update Statistics


The SQL compiler uses table statistics to help estimate the cost of potential execution plan candidates. (By the way, don't confuse table statistics with the execution statistics we covered above. 

Table statistics tell us about the distribution of data values in the columns of a table.) Table statistics are not maintained automatically. You have to tell NonStop SQL to re-calculate them on an on-going basis. You use the UPDATE STATISTICS command for this. 

Not only is it important to keep table statistics up to date, you must also generate the right statistics. In general, it is a good idea to update statistics for all key columns. If you are tuning a poorly performing query, make sure there are statistics for the columns involved in the filter conditions. For MX, be sure to include multicolumn statistics when a table filter refers to more than one column.
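A typical sketch of the command in MXCI, using hypothetical table and column names (check the UPDATE STATISTICS syntax for your SQL/MP or SQL/MX release):

UPDATE STATISTICS FOR TABLE sales.orders ON EVERY KEY COLUMN;
UPDATE STATISTICS FOR TABLE sales.orders ON (custnum, orderdate);

The second statement generates the multicolumn statistics that help the MX compiler when a filter refers to both columns together.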


2. Creating or Modifying Indexes

Adding an index is often the best way to improve query performance.

However, you need to consider the additional cost that will be incurred for INSERT, UPDATE, and DELETE (IUD) operations against the table. An insert or delete against the table will also cause an insert or delete against all of its indexes. An update will cause an index to be updated if the index includes any columns that are changed by the update.

For a table that is not too volatile, creating an index to support an important query is often worth the extra cost incurred by IUD operations.

You might consider creating an index to:
• Support a filter, avoiding a full table scan.
• Support a join, if you suspect a nested join supported by an index will perform better than a hash join.
• Support an aggregate, enabling the use of an index only scan instead of a table scan.

You might consider adding columns to an index to avoid a table scan. If an index contains all of the columns required by a query, an index-only scan can be used; this is almost always more efficient than a table scan (since index rows tend to be much smaller than table rows).
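For example, assuming a hypothetical orders table that is frequently filtered on customer number and order date, an index such as the following could support both the filter and an index-only scan for queries that touch only these columns:

CREATE INDEX order_cust_ix ON sales.orders (custnum, orderdate);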

3. Changing the Query Text
It is often possible to write a query in more than one way, yet still get the same rows in the result. Changing the way that a query is written can allow the SQL compiler to choose a better execution plan.



Here is a trivial example to illustrate the point:

Consider this query:

SELECT * FROM DEPT D 
WHERE D.DEPTNUM IN (1000, 3000, 4000)



The DEPT table has a primary (cluster) key of DEPTNUM.

The compiler generates a plan that performs a full table scan on DEPT, and applies the filter predicate to each row.

If we re-write the query like this:
SELECT * FROM DEPT D
WHERE D.DEPTNUM IN (1000, 3000, 4000) AND
D.DEPTNUM >= 1000 AND
D.DEPTNUM <= 4000

This returns the same results, but the compiler generates a plan that does a subset scan that only reads the rows between DEPTNUM 1000 and 4000.


4. Using SQL Control Statements


You can use SQL CONTROL statements to influence the behavior of the SQL compiler. You can use them to directly control join order, join methods, access paths, and parallelism.
There are also less direct options, such as specifying selectivity estimates.

The MX CONTROL QUERY DEFAULT (CQD) statement supports a large number of options, including a number that are useful in query tuning.

The JOIN_ORDER_BY_USER attribute allows you to control the join order used in the execution plan. The tables are joined in the order in which they appear in the FROM clause.

The HASH_JOINS, NESTED_JOINS, and MERGE_JOINS attributes can be used to enable or disable the corresponding join methods. Unfortunately, they apply to the query as a whole, and not to individual tables.

The INTERACTIVE_ACCESS and INDEX_ELIMINATION attributes can be used to encourage or discourage the use of indexes.

The ATTEMPT_ESP_PARALLELISM attribute controls the generation of parallel plans.
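A hedged sketch of how such CQD statements might be issued before preparing the query (the 'ON'/'OFF' values follow the usual CQD convention but should be confirmed for your release):

CONTROL QUERY DEFAULT JOIN_ORDER_BY_USER 'ON';
CONTROL QUERY DEFAULT HASH_JOINS 'OFF';

The settings then affect statements compiled later in the same session.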

The MX CONTROL QUERY SHAPE (CQS) statement provides a means to force the use of a particular execution plan. However, its use is generally discouraged. A change to the database or the SQL compiler may make the forced plan inoperable. CQS statements are hard to write, especially from scratch. The best approach is to use CQD statements (including the JOIN_ORDER_BY_USER option) to get a plan that is as close as possible to the desired plan. Then use the SHOWSHAPE statement to generate the CQS statement that reflects the current plan. Finally, make adjustments to the generated CQS statement.

The MP CONTROL TABLE statement provides direct control over the join order, join methods, and access paths used in the execution plan. The CONTROL EXECUTOR statement allows for the selection of parallel or serial plans.

Using CONTROL statements is a two-edged sword. They give you a way to influence the execution plan produced by the compiler, but at the same time they may prevent the compiler
from finding a better plan.

Measuring Query Performance


In query tuning, there are two motivations for measuring performance: 

To compare the relative performance of your tuning experiments. This enables you to determine if an experiment improves performance compared to the baseline.

To measure the amount of work being performed when executing the query. This can help identify unnecessary work, for example database records that are read, but not used. The gold standard for monitoring performance is to use Measure.

You can use Measure to collect very detailed information on all aspects of query performance. However, it is quite labor intensive to gather and analyze a complete set of Measure data for a query.
This is especially true for non-trivial queries that access many tables and involve many ESP and DP2 processes.

A more practical approach is to use the execution statistics provided by SQL. SQL execution statistics include the elapsed time for compiling and executing the query, as well as per-table statistics. The per-table statistics include:

Records read and used. The number of records read from a table and the number that were actually used.

Number of messages and message bytes. This is a measure of message system activity, and is a good analog of IO load.

Lock waits and escalations. This is an indicator of lock contention. The response time of the query will suffer if it encounters lock waits.

Unfortunately, neither VQP nor VQA will gather the SQL execution statistics for you. If you are using one of these tools for the execution plan analysis, you will need to use MXCI to gather the execution statistics. The easiest way to do this is to use the SET STATISTICS command. (MP users can use the SQLCI SET SESSION STATISTICS command.)

Once the statistics option is turned on, the execution statistics will be displayed automatically after preparing and/or executing the query. The SQLXPress Visual Query Tuner automatically collects and stores execution statistics as part of performing a tuning experiment.
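For example (a sketch; see the MXCI and SQLCI documentation for the exact forms):

-- in MXCI (SQL/MX):
SET STATISTICS ON;
-- in SQLCI (SQL/MP):
SET SESSION STATISTICS ON;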

Query tuning

Query tuning is a multi-step process that generally goes something like this:



1. Establish a baseline. Generate the execution plan and establish the current performance characteristics of the existing query. This gives you a base for comparison when you perform tuning experiments. 

2. Analyze the current execution plan. You should understand the path to the data that is being taken by NonStop SQL. 

3. Identify problems with the existing plan, and come up with a better plan.

4. Make changes to get the plan you want. This is not always obvious. You may have to perform multiple tuning experiments to arrive at the solution.

5. Having implemented the changes, generate a new execution plan, and re-measure the performance. 

6. Compare the results with the baseline. If things have improved, incorporate the change into the new baseline, otherwise abandon the change. 

Repeat steps 2 through 6 until you are satisfied with the query performance.

Why Does an SQL Query Run Slowly?

Introduction 

Query tuning is the process by which you answer the question "How can I make this query run faster?"

There are many possible reasons why an SQL query runs slowly.

• The system may simply not have enough processing power, memory, or disk bandwidth to handle the application load.

• There may be a problem with the system configuration, such as insufficient disk cache, MX buffer space, or kernel managed swap file space.

• The files used to store the database tables may be badly fragmented.

Such "system-level" problems are outside the scope of query tuning, and must be addressed separately before query tuning can be effective.

The query tuning process treats the query text as the sole statement of the problem. Its goal is to produce an execution plan that runs the query faster, while returning the same set of rows as before.

Why do queries run slowly?



From a business process perspective, most reasonable queries will return a moderate number of rows.

For a reasonable OLTP query, the maximum number of rows returned is around one hundred or so (and is often much less than this).

The user of an interactive query doesn't want to browse through many more rows than this.

Even for a reasonable batch or reporting query, the user does not want to browse through more than a few hundred (or, at most, a few thousand) rows.

If a batch-type query returns more than a few thousand rows, it needs to be broken down into multiple more closely filtered queries, perhaps aimed at multiple classes of user. Modern NonStop systems are quite capable of executing a query that returns a few hundred (even a few thousand) rows in a reasonably short time.

So the question arises: why does a query that returns a modest number of rows take too long to run? One answer is that the query accesses and processes too many rows that are eventually discarded and do not contribute to the final query result.

The goal in this case is to avoid performing such non-useful work.

Another such class of query is one that returns a reasonable number of rows, but which is required to aggregate a very large number of rows in order to produce the required results.

A query containing a GROUP BY clause is a common example of this. These queries can be difficult to tune. A common solution is to pre-aggregate the data in the database, and to query the summary data without accessing the details. The careful use of triggers can keep the redundant summary data synchronized.

Saturday, March 26, 2016

Martin Fink Speech Summary about HP Nonstop

Recently I read an interview with Martin Fink, the CTO of HP NonStop Systems. Though it was addressed to HP employees, a few of the points still apply to all NonStop users.



I was really impressed by his words and rules. Here they are:

Rule 1 : Be selfish

Rule 2 : Make a contribution

Rule 3 : Observe the Customer

Rule 4:  Know your Value Proposition
                 - Data Integrity
                 - Scalability
                 - Fault Tolerance

Rule 5 : Understand the technology at a very DEEP Level.

                 - Nonstop should Transform from Hardware business to Software Business

Rule 6 :  Structure Follows Strategy

Rule 7 :  Know your Business Model

What's the NEXT for NONSTOP ?

We need to take the NonStop software strategy mainstream. The mainstream software world today tends to revolve around 4 things:

- Linux
- Virtualization
- Open Source
- Cloud

In HP Labs, NonStop is running in a VM on Linux. Soon, it will hit the market.

Also, bringing NonStop onto x86 and using InfiniBand interconnects allows HYBRID SYSTEMS to be built that combine Linux and Windows with NONSTOP. That will be a POWERFUL COMBINATION.

Opensource is a Business choice. Not a Technology Choice.

Cloud aspires to what Nonstop Already is.





Friday, March 25, 2016

Additional Security Considerations for Pathway


PATHWAY configuration (per PATHMON)

The OWNER attribute specifies the owner of the PATHMON environment. That user can:

- stop the PATHMON process,
- add programs,
- delete objects,

and make other modifications to the global configuration.

The SECURITY attribute, whose value is relative to the OWNER’s user ID, specifies who else can modify the PATHMON environment after you issue the START PATHWAY command.

  • The default is “N” for TS/MP 2.0 and 2.1, and should be changed to “O”.
  • The default is “O” for TS/MP 2.3 and later versions.

For all TS/MP versions, prior to issuing the START PATHWAY command the owner is the process access ID (PAID) of the PATHMON process and the SECURITY attribute is O (owner only).

HP recommends that the OWNER of a given PATHMON environment be a user ID associated with management of that specific application, rather than SUPER.SUPER or a member of the SUPER group.
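A hedged PATHCOM sketch of these global attributes, using a hypothetical PATHMON process name and application user ID (confirm the exact commands against the TS/MP manuals):

PATHCOM $PM
=SET PATHWAY OWNER PAYAPP.MGR
=SET PATHWAY SECURITY "O"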

SERVER configuration (per server class)

OWNER specifies the user ID that controls access from a Pathsend process to a specific server class. (The TCPs ignore this server attribute.)

If not specified, OWNER defaults to the user ID who started the PATHMON process. Specify an appropriate user.

SECURITY specifies the users, in relation to the OWNER attribute, who can access a server class from a Pathsend requestor. (TCPs ignore this attribute.) The default is “N”, which should be changed to the most restrictive setting that does not interfere with correct application operation.

For Guardian servers, ensure that server class ASSIGNs and DEFINEs point to the appropriate files, and that the server class volume is explicitly set. Similar considerations apply to CWD and ARGLIST for OSS servers.

For OSS server classes, UMASK specifies the default permissions for the owner, group and others for OSS files created by the server process instance (see General OSS file security for more information on umask).

The default for UMASK is -1; HP recommends that you set it to a more restrictive value (022 or tighter).
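A corresponding server-class sketch in PATHCOM (the server-class name, user ID, and values are hypothetical; UMASK applies only to OSS server classes):

=SET SERVER OWNER PAYAPP.MGR
=SET SERVER SECURITY "G"
=SET SERVER UMASK 022
=ADD SERVER PAY-SERV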

Network security

The PATHMON process controlling the server class has to have corresponding user IDs and remote passwords with all of:

• The system where the requesting process is running
• The system where the PATHMON process is running
• The system where the server class is running

This level of security is required because the LINKMON process or the ACS subsystem processes must be able to open the PATHMON process (to make link requests);

the LINKMON process or the ACS subsystem processes must be able to open the server processes (to send user requests);

and the PATHMON process must be able to open the server processes (to send startup messages).

All of these opens are performed with the PATHMON user ID.

Note: If the user starting the PATHMON process is an alias, then the alias must have matching remote passwords on all involved systems. It is not sufficient for the underlying user ID to have matching remote passwords.

Note: The user ID of the Pathsend process need not have remote passwords to the PATHMON system or to the server-class system to access the server class.
Moreover, the Pathsend-process user ID need not be known on the PATHMON or server-class systems.

Server-class security

LINKMON processes or ACS subsystem processes perform authorization checks on each server-class send operation to make sure that the user ID of the Pathsend process at the time of the send conforms to the server class’s OWNER and SECURITY attributes. You set these attributes in the SERVER configuration for each server class, as described above.


Sunday, March 20, 2016

Security Hardening

In simple terms, security hardening describes the implementation of a series of measures designed to make a system less vulnerable to attacks. A hardened system has a reduced number of attack points, making it more difficult to compromise.

Obvious security recommendations, such as the use of strong passwords, are helpful measures
when trying to prevent breaches or Hacks.



But the implementation of stronger defenses, such as ensuring strong OSS access permissions, can be more challenging to carry out without correct guidance.

Unfortunately, information regarding security hardening practices tends to be scattered and difficult to analyze and implement.


Security Hardening should be considered as an ongoing project that will require continuing attention and updating. Companies must constantly look for the latest methods, rules and best practices to ensure that system Data and Services are protected.

Saturday, March 19, 2016

Martin Fink's (CTO for HP Nonstop Solutions) View on HP Nonstop

https://www.youtube.com/watch?v=xjUdz8fZ5Wg

Additional security considerations for FUP

General:  Background

You need both read and write access to a file in order to issue an ALTER command against it.  To rename a file, you also need purge access if you are not the super ID.

The FUP INFO command can be used to identify all Guardian files that have LICENSED, PROGID, CLEARONPURGE and/or TRUST set.  It also can display the underlying Guardian security of a file protected by Safeguard at the individual file level.
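For example, a detailed listing of a subvolume (the fileset below is illustrative) shows the owner, security vector, and attributes such as LICENSED, PROGID, and CLEARONPURGE:

FUP INFO $SYSTEM.SYSTEM.*, DETAIL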

FUP is used to license SQL/MP object program files.  They cannot be licensed through SQLCI. For SQL/MP files, GIVE applies only to object files.  You must use SQLCI to give away ownership of other SQL/MP files. 

Similarly, PURGE and PURGEDATA apply only to object files and SQLCI must be used for other objects. FUP can display information about OSS files (including their security vectors and whether they are protected by OSS ACLs), SQL/MP object program files and SQL/MX files, but cannot manipulate them in any way.

Non-Safeguard-protected files:  background

You can preserve the source file’s owner ID and Guardian security vector in the copied file if you use SAVEID or SAVEALL with FUP DUP.  The LICENSE attribute is preserved only if the PAID of the current FUP is SUPER.SUPER and the target file resides on the node where FUP is running. 

The same rules apply to PROGID, except that it also is preserved if the PAID of the current FUP is the file owner.  CLEARONPURGE is transferred unconditionally. You need to be either the owner or SUPER.SUPER to GIVE ownership of a file. 

If you are not SUPER.SUPER, you also need purge access to the file. GIVE clears PROGID.  After the GIVE you need to use SECURE to set it again; the usual rules apply. You can use REVOKE to reset CLEARONPURGE and PROGID if you are either the file owner or SUPER.SUPER.  You must be SUPER.SUPER to revoke a file’s LICENSE attribute.
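A short sketch of these commands with hypothetical file and user names (the behavior of SAVEALL, GIVE, and PROGID follows the rules described above):

FUP DUP $DATA01.APPSRC.*, $DATA02.APPREL.*, SAVEALL
FUP GIVE $DATA02.APPREL.SRVOBJ, APP.OWNER
FUP SECURE $DATA02.APPREL.SRVOBJ, "UUNU", PROGID

Note that, as described above, the GIVE clears PROGID, so the SECURE command is needed afterwards to set it again.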

Safeguard-protected files:  background

You need create access to the destination volume and subvolume as well as read access to the input file in order to duplicate a Safeguard protected file.

Caution:  If you use DUP with the PURGE option but do not have create access to the target file, the original file at the target location is purged but the new version is not created.

Caution:  A file’s Safeguard protection is not automatically inherited by the target file.  It will inherit any applicable volume-level and subvolume-level ACLs on the target volume, and will not be Safeguard-protected if none apply. 

You will need to use SAFECOM to restore or set Safeguard protection for the new file. As with non-Safeguard-protected files, for FUP DUP both SAVEID and SAVEALL transfer the source file’s owner and corresponding security to the target file.

General:  Best practices

You can set the NOPURGEUNTIL attribute to prevent a file from being purged before a specified date and time.

You can use FUP INFO to identify all of the Guardian files owned by a specific user. You can find out what processes and associated users have an Enscribe or SQL/MX file open by using the LISTOPENS command.

Caution:  PURGEDATA does not physically purge the file contents; it simply resets the end of file to zero.  This applies whether or not the file (or system) has CLEARONPURGE set.  PURGE access allows deletion of both a file and its contents, but you can effectively purge the contents of a file without PURGE access through a combination of PURGEDATA and DEALLOCATE.

Caution:  When a file with CLEARONPURGE set is purged, its disk process is going to rewrite the contents with zeros up to the end of the last allocated extent.  The disk process has some built-in pacing for the writes, but this activity still has the potential to negatively affect application and system performance.

SQL/MX object security

SQL/MX object security:  Background 

SQL/MX executes within the OSS environment, but database files reside within the Guardian environment.  For SQL/MX tables, data access uses the ANSI GRANT/REVOKE authorization model. 

SQL/MX is not tightly integrated with Safeguard; however, with SQL/MX version 3.3 and later, Safeguard volume protection can be used to control where SQL/MX objects are created.  SQL/MX uses Guardian security for SQL/MP objects.

Installation and fallback:  cautions

The initial installation of SQL/MX, done with the InstallSqlmx script, must be run by SUPER.SUPER (not an alias to it).

Do not change the security setting on the anchor file created by SQL/MX.  If the file is modified, SQL/MX ceases to operate.

If you do not use DSM/SCM to do a SQL/MX installation, you must explicitly license the ZCLIPDLL public DLL. 

If this step is not performed, program load errors might occur.

Make sure that the following files are licensed in $SYSTEM.SYSTEM:
• IMPORT
• MXUTP
• MXIMPDDL
• MXAUDSRV
• MXCMP
• MXESP
• MXRTDSRV
• MXTOOL

Make sure that the mxci file in /usr/tandem/sqlmx/bin has both read and execute permissions.

Best Practices

Create a separate SQL/MX security administrator group if running 3.1 or a subsequent version. 

Security administrators manage access to SQL/MX data but they do not have access to the underlying data itself unless explicitly GRANTED access by an object owner or designee or through PUBLIC access. 

Use this group to administer security. 

Consider having only one user create database objects (such as a user designated as the database administrator). 

In that way, no users other than the security administrators and database administrator may grant object access. Periodically audit the set of security administrators. To obtain detailed security administrator information:
OSS> mxci
>> SET SCHEMA nonstop_sqlmx_<system name>.system_security_schema;
>> SELECT * FROM privileged_users FOR read uncommitted access;

Use GRANT/REVOKE CREATE CATALOG to restrict the set of users who can create catalogs.

Use ANSI VIEWs to limit access to specific fields of a table to those users entitled to see their contents best practices