Saturday, December 26, 2015

IoT (Internet of Things) on HP NonStop

As IoT starts rolling out across companies, a complete IoT solution will require both data-in-motion and data-at-rest processing.

HAVEn does an excellent job for 'data at rest'.

NonStop with some additional players will be a great fit for 'data in motion'.

Just as NoSQL (which stands for "Not only SQL") describes new methods and databases that enhance and supplement SQL databases, I believe NonStop can supplement HAVEn in this data-in-motion space. So with NonStop we have 'not only' HAVEn: NonStop and HAVEn are truly better together.


Wednesday, December 16, 2015

COBOL Programming - Tip

Tip #1 

In a Tandem COBOL program, always use ENTER "COBOL_COMPLETION_" USING 1 
to abend a program. 

This preserves the call stack in the ZZSAnnnn saveabend file.
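
A minimal sketch of how this might sit in an error-handling paragraph (the paragraph and message text are hypothetical):

* Hypothetical fatal-error paragraph. Passing 1 requests an
* abnormal termination, so the call stack is preserved in the
* ZZSAnnnn saveabend file instead of ending with a clean STOP RUN.
 Z100-ABEND-PROGRAM.
     DISPLAY "FATAL ERROR - ABENDING PROGRAM".
     ENTER "COBOL_COMPLETION_" USING 1.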

Tip #2

When you process information within a field with a known structure, don't use reference modification.

Instead, make a REDEFINES of the field with self-explanatory field names, so that during maintenance not all references have to be changed.
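
A minimal sketch (the field names and layout are hypothetical):

* Field with a known internal structure: positions 1-4 hold a
* branch code, positions 5-12 an account number.
 01  WS-ACCOUNT-KEY                  PIC X(12).
 01  WS-ACCOUNT-KEY-R REDEFINES WS-ACCOUNT-KEY.
     05  WS-BRANCH-CODE              PIC X(4).
     05  WS-ACCOUNT-NUMBER           PIC X(8).

* Avoid reference modification:
*    IF WS-ACCOUNT-KEY (1:4) = "0001" ...
* Prefer the named redefinition:
     IF WS-BRANCH-CODE = "0001" ...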

Tip #3

Do not use the LINKAGE SECTION as a working area; keep scratch fields in the WORKING-STORAGE SECTION.

Tip #4

Use ?SOURCE as the standard way to include source text. The exception is when the REPLACING option is needed; in that case COPY must be used.

Explanation:
-          ?SOURCE is needed for SQL pre-compile options.
-          COPY supports the REPLACING option; ?SOURCE doesn't.
-          COPY knows default libraries; with ?SOURCE you must always specify the needed library.
-          ?SOURCE can include library text containing nested source directives (e.g. SQL statements); COPY cannot handle this.
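
A minimal sketch of both forms (the library and section names are hypothetical):

* Standard: the ?SOURCE compiler directive (question mark in column 1)
?SOURCE $DATA.COPYLIB.COMMON (WS-ACCOUNT-REC)

* Exception: COPY, used only when REPLACING is needed
     COPY WS-ACCOUNT-REC OF $DATA.COPYLIB.COMMON
          REPLACING ==:PFX:== BY ==WS==.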

Thursday, December 10, 2015

HP NonStop COBOL and z/OS COBOL Differences

I have written a white paper on the differences between HP NonStop COBOL and z/OS COBOL. Here's the link:

http://www.cognizant.com/InsightsWhitepapers/A-Checklist-for-Migrating-Big-Iron-Cobol-Applications.pdf


Wednesday, December 09, 2015

SQLMP Programming TIP

In SQL/MP programming, don't use SQL statements like:

 - BEGIN WORK
 - COMMIT WORK
 - ROLLBACK WORK

Instead, please use the equivalent TAL statements:
      START TRANSACTION,
      ABORT TRANSACTION and
      END TRANSACTION.

The reason is that SQL statements like COMMIT WORK also perform a FREE RESOURCES, which results in cursors being closed.
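
A minimal, simplified sketch of how this might look in COBOL, assuming the TAL statements above correspond to the Guardian procedures BEGINTRANSACTION, ENDTRANSACTION and ABORTTRANSACTION; the flag and error variables are hypothetical and parameter lists are omitted:

* Start the TMF transaction.
     ENTER TAL "BEGINTRANSACTION" GIVING WS-TMF-ERROR.

*    ... SQL work with open cursors goes here ...

* Commit or back out. Unlike COMMIT WORK, neither call implies
* FREE RESOURCES, so the cursors stay open.
     IF WS-SQL-OK
        ENTER TAL "ENDTRANSACTION" GIVING WS-TMF-ERROR
     ELSE
        ENTER TAL "ABORTTRANSACTION" GIVING WS-TMF-ERROR
     END-IF.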

NSSQL Tip #5 - Insert into a relative table

By default, insertion into a relative table is done at the end of the table. If there is no room left at the end of the table, a "table-full" error is returned.

To avoid this, use the ANYWHERE option of the INSERT statement. With this option, rows can be inserted into any available area.

Alternatively it is possible to assign a value to the SYSKEY. In that case the ANYWHERE option is not needed.
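
A minimal sketch of the first approach (the DEFINE, column and host-variable names are hypothetical, and the exact position of the ANYWHERE keyword in the INSERT statement is my assumption; it is shown here after the VALUES clause):

     EXEC SQL
      INSERT INTO =LOG-TABLE (LOG_TEXT)
             VALUES (:WS-LOG-TEXT OF WS-LOG-REC)
             ANYWHERE
     END-EXEC.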

Tandem SQLMP and Z/OS DB2 Difference

I have written a white paper on the differences between SQL/MP and DB2 some time back. Here's the URL:

http://www.cognizant.com/insightswhitepapers/Dynamics-of-Leading-Legacy-Databases.pdf

Tuesday, December 08, 2015

COBOL Pathway Server Class - TIP

A server may only perform a STOP RUN statement when an EOF is received on $RECEIVE.

Otherwise the Pathway monitor is not aware that the server process has stopped; when it routes another request to that particular server process, the requester gets an error, which wastes CPU resources.
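
A minimal sketch of the usual server main loop (RECEIVE-FILE is assumed to be the FD assigned to $RECEIVE; the other names are hypothetical, and the request/reply handling inside PROCESS-REQUEST is omitted):

 MAIN-LOOP.
* $RECEIVE is opened for I-O so replies can be written back.
     OPEN I-O RECEIVE-FILE.
     PERFORM UNTIL WS-RECEIVE-EOF = "Y"
         READ RECEIVE-FILE
              AT END MOVE "Y" TO WS-RECEIVE-EOF
              NOT AT END PERFORM PROCESS-REQUEST
         END-READ
     END-PERFORM.
     CLOSE RECEIVE-FILE.
* Only here, after EOF on $RECEIVE, is it safe to stop; stopping
* earlier would let PATHMON route a request to a server that is gone.
     STOP RUN.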

Screen COBOL - Tip

Overlay screens may have a maximum depth of 3 (that is, base screen / overlay / overlay).

If a screen is defined with many overlays, the memory resources of the TCP fill up, resulting in memory swapping to disk.

Influence the Query Plans

When to Influence the Optimizer

• A poor plan selected by the optimizer results in poor execution time
• Provide a workaround when a query results in a compile-time error
• Compilation takes a long time: limit the search space to get a better compile time
• In embedded static SQL applications, the values of host variables used in predicates are not known at compile time, for example:
− WHERE T.col1 = :hv
− WHERE T.col1 BETWEEN :hv1 AND :hv2

Influencing the Optimizer — Query Transformation

• Query formulations can sometimes be tuned to help the optimizer
    − Example: prefer joins over subqueries; while SQL/MX converts many subqueries into joins, there are some correlated ones that it cannot
• Specify all applicable predicates for positioning and joins
• When using multi-valued predicates (row constructors) on clustering-key columns, force MDAM to avoid scans

Best Practices

• Give the optimizer as much information as possible
    − Efficient predicates
    − Consider window predicates
• Use CQDs for optimizer hints (see the sketch after this list)
    − HIST_DEFAULT_SEL_FOR_PRED_RANGE
    − INTERACTIVE_ACCESS
• Some CQDs may be set globally in the SYSTEM_DEFAULTS table
• Use CQS (CONTROL QUERY SHAPE) as a last resort
    − Only force the basic outline of a plan and let the optimizer find the optimal solution within the boundaries imposed by the command
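
A minimal sketch of a CQD pair around a single statement, using the INTERACTIVE_ACCESS default named above (the statement itself is omitted, and values other than 'ON'/'OFF' would need to be taken from the SQL/MX reference):

     EXEC SQL
      CONTROL QUERY DEFAULT INTERACTIVE_ACCESS 'ON'
     END-EXEC.

*    ... the statement to be influenced goes here ...

     EXEC SQL
      CONTROL QUERY DEFAULT INTERACTIVE_ACCESS 'OFF'
     END-EXEC.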

Wednesday, November 25, 2015

NSSQL - Control-statements

Control-statements are compiler-directives. Their impact is only at compile time. The resulting compiled queries are stored in the Query Execution Plan (QEP). At runtime this static QEP is used to access the database.

The scope of a control-statement extends until it is ended by another control-statement. All SQL statements in the scope of a control-statement are influenced by it.
Control-statements that are of importance for a certain query have to be coded before that query.
General control-statements (e.g. control-statements that are used to influence table locks) are usually coded in the first part of a program.


In some cases a control-statement is needed to influence only one query, for instance to control the access path of that query or to influence the block buffering. It is good practice to code such a control-statement right in front of the query that has to be influenced, and to code another, neutralizing control-statement right after that query so as not to affect the following queries.

Example:
 *----------------------------------------------------------------------
  S080-COUNT-REK-MULTI-OVK SECTION.
 *----------------------------------------------------------------------
       EXEC SQL
        CONTROL QUERY INTERACTIVE ACCESS ON
       END-EXEC

       EXEC SQL
        SELECT COUNT (DISTINCT MULTI_OVK_NR)
          INTO :WS-COUNT-MULTI-OVK
          FROM =OVK-EXTRACT-VIN
         WHERE VIN_PRODUKT_NR    = :VIN-PRODUKT-NR    OF WS-OVK-EXTRACT-VIN
           AND REK_NR            = :REK-NR            OF WS-OVK-EXTRACT-VIN
           AND INTERN_REK_VOLGNR = :INTERN-REK-VOLGNR OF WS-OVK-EXTRACT-VIN
           AND MULTI_OVK_NR      > 0
         STABLE ACCESS
       END-EXEC

       EXEC SQL
        CONTROL QUERY INTERACTIVE ACCESS OFF
       END-EXEC

 *----------------------------------------------------------------------
  S090-COUNT-REK-BBO SECTION.
 *----------------------------------------------------------------------
       EXEC SQL
        SELECT COUNT(*)
          INTO :WS-COUNT-BBO
          FROM =OVK-EXTRACT-VIN
         WHERE VIN_PRODUKT_NR    = :VIN-PRODUKT-NR    OF WS-OVK-EXTRACT-VIN
           AND REK_NR            = :REK-NR            OF WS-OVK-EXTRACT-VIN
           AND INTERN_REK_VOLGNR = :INTERN-REK-VOLGNR OF WS-OVK-EXTRACT-VIN
           AND MULTI_OVK_NR      = 0
         STABLE ACCESS
       END-EXEC

BEWARE

Never use CONTROL TABLE <tablename> without any options. This resets all control options for that table to their default values. Are you sure you want to do that?

Correlation names

The CONTROL TABLE control-statement can be used with a correlation name. The control-statement only affects queries where that correlation name is used.

Example (OEV is the correlation name):
       EXEC SQL
        CONTROL TABLE =OVK-EXTRACT-VIN AS OEV
                ACCESS PATH INDEX =OVK-EXTRACT-VIN-I2
       END-EXEC.

       EXEC SQL
        DECLARE A_CURSOR CURSOR FOR
        SELECT …
         FROM =OVK-EXTRACT-VIN OEV
         WHERE …
         REPEATABLE ACCESS IN EXCLUSIVE MODE
         FOR UPDATE OF …
       END-EXEC.

Extract from explain:
Operation 1.0  : Scan
    Table             : \ZT1.$DATA.OK13WDB.OVKVIN
                        with correlation name OEV
    Access type       : Record locks, repeatable access
    Lock mode         : Chosen by the system
    Column processing : Requires retrieval of 8 out of 71 columns

      Access path 1     : Alternate \ZT1.$DATA.OK13WDB.OVKVINI2, path forced
      SBB for reads     : Virtual
      Begin key pred.   : REK_NR = :REK-NR         OF WS-OVK-EXTRACT-VIN
      End key pred.     : REK_NR = :REK-NR         OF WS-OVK-EXTRACT-VIN

NSSQL Tips #4

The following statements are forbidden because they result in poor maintainability.

·         SELECT * FROM <table>
Using SELECT * may also result in a less efficient access path; VSBB or index-only access may not be chosen.

Specify all needed columns:
SELECT col1, col2 FROM <table>

·         INSERT INTO <table> VALUES (valcol1, valcol2)

Specify all columns. (Columns with default-values may be left out).

INSERT INTO <table> (col1, col2) VALUES (valcol1, valcol2)

The following statement is forbidden because it often gives unexpected results.

·         CONTROL TABLE <table>
Instead, use a control-statement with the needed option and the correct (default) value.

Tuesday, November 24, 2015

Increasing the TMF Lock Limit

A few options for increasing the TMF lock limit:


Option 1
Put the alternate-key files on other volumes. Since locks are counted per volume and not per file, this may solve the issue.

Option 2
Increase the maximum lock limit using SCF.
You can control the number of locks that a transaction can hold on a specific volume, and lock escalation, by using SCF to change MaxLocksPerTCB to a maximum of 100,000. Use this command:

SCF ALTER $volume, MAXLOCKSPERTCB n

More information can be found in the SCF documentation for the storage subsystem.

2-> info $data4,detail
STORAGE - Detailed Info Magnetic DISK \ETDEV1.$DATA4
Common Disk Configuration Information:
*BackupCpu............................. 1
*HighPin............................... ON
*PrimaryCpu............................ 0
*Program............................... $SYSTEM.SYSTEM.TSYSDP2
*StartState............................ STARTED

Disk Type Specific Information:
*AuditTrailBuffer/SQLMXBuffer (MB)..... 0
*AutoRevive............................ OFF
*AutoSelect............................ n/a
*AutoStart............................. ON
*CapacityMismatch...................... OFF
*CBPoolLen............................. 1000
*FastBulkWrite......................... OFF
*FSTCaching............................ OFF
*FullCheckpoints....................... ENABLED
*HaltOnError........................... 1
*LKIDLongPoolLen....................... 8
*LKTableSpaceLen....................... 15
*MaxLocksPerOCB........................ 5000 (This is it!!!)
*MaxLocksPerTCB........................ 5000 (This is it!!!)
*NonAuditedInsert...................... OFF
*NumDiskProcesses...................... 4
*OSSCaching............................ ON
*PhysvolSelect......................... n/a

*Pool.................................. None

Wednesday, November 18, 2015

Industry Standard Definitions - On Availability

High Availability (HA) –
     The attribute of a system to provide service during defined periods, at acceptable or agreed-upon levels, masking UNPLANNED OUTAGES from end users. It employs fault tolerance; automated failure detection, recovery, bypass reconfiguration, and testing; and problem and change management.

Continuous Operations (CO) –
     The attribute of a system to operate continuously and mask PLANNED OUTAGES from end users. It employs non-disruptive hardware and software changes, non-disruptive configuration, and software coexistence.

Continuous Availability (CA) –
     The attribute of a system to deliver non-disruptive service to the end user 7 days a week, 24 HOURS A DAY (there are no planned or unplanned outages). It includes the ability to recover from a site disaster by switching computing to a second site.

Wednesday, November 11, 2015

NSSQL TIPS #3

Avoid sorting

Avoid sorting of large amounts of data. It takes time and has a negative influence on performance.
Sorting may be used in the following cases:
-       ORDER BY
-       GROUP BY
-       DISTINCT
-       UNION (not with UNION ALL)
-       MERGE JOIN


Avoid re-opening a cursor with a large sort in its access path. If such a cursor is closed, e.g. due to a COMMIT, then reopening the cursor to continue processing will repeat the sort, which kills performance. If possible, open the cursor with BROWSE ACCESS outside a TMF transaction.
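
A minimal sketch of such a cursor declared with BROWSE ACCESS so that it can be kept open outside the TMF transaction (the DEFINE, column and host-variable names are hypothetical, and the placement of the access option is my assumption, modelled on the examples elsewhere in this blog):

     EXEC SQL
      DECLARE SORT-CURSOR CURSOR FOR
      SELECT REK_NR
           , BEDRAG
        FROM =GROTE-TABEL
       WHERE BEDRAG > :WS-MIN-BEDRAG
       ORDER BY BEDRAG
       BROWSE ACCESS
     END-EXEC.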

NSSQL TIPS #2

Functions in search-conditions

Do not use functions on table columns in a search condition. This may result in a scan, because the optimizer will not consider any available index on that column.
Instead, apply the function to the host variable.

Example:
WHERE JULIANTIMESTAMP(datum_kolom) = :hostvariabele
This results in a full table scan.

Alternative:

WHERE datum_kolom = CONVERTTIMESTAMP(:hostvariabele)

Wednesday, November 04, 2015

CODECOVERAGE

• Measures the quality of testing
• Shows which lines of code have been hit during testing
• Specify the CODECOV option at compile time
• Specify the -l pgo DLL at link time
• Perform GAT/PAT testing (the measurement run)
• Run the HTML reporter on a PC




Tuesday, November 03, 2015

NSSQL TIPS


Select unknown values only




For performance reasons it is better to select as few columns as possible; the optimizer may then choose techniques like index-only access. Besides, it makes the query easier to read and maintain.
The following query selects five columns of table FOTB100. The first two columns are also used in the where-clause with "="-predicates, so their values are no surprise and it is not necessary to select them. Leave them out of the column list (a trimmed version is sketched after the example).
 SELECT
        REK_NR
      , PE_TX_REF
      , ST_TS
      , ST
      , PRI
 INTO
       :REK-NR    OF FINTXSTH
     , :PE-TX-REF OF FINTXSTH
     , :ST-TS     OF FINTXSTH
     , :ST        OF FINTXSTH
     , :PRI       OF FINTXSTH
 FROM  FOTB100
 WHERE  REK_NR     = :REK-NR    OF FINTXST
   AND  PE_TX_REF  = :PE-TX-REF OF FINTXST
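
A sketch of the trimmed version, dropping the two columns whose values are already fixed by the "="-predicates; everything else is unchanged from the example above:

 SELECT
        ST_TS
      , ST
      , PRI
 INTO
       :ST-TS  OF FINTXSTH
     , :ST     OF FINTXSTH
     , :PRI    OF FINTXSTH
 FROM  FOTB100
 WHERE  REK_NR     = :REK-NR    OF FINTXST
   AND  PE_TX_REF  = :PE-TX-REF OF FINTXST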



Monday, November 02, 2015

Unicode - UTF 8, 16 and 32 Details

  • UTF8 is variable 1 to 4 bytes.
  • UTF16 is variable 2 or 4 bytes.
  • UTF32 is fixed 4 bytes.
In brief, UTF-32 uses 32-bit values for each character, which allows a fixed-width code for every character.
UTF-16 uses 16-bit values by default, but that only gives you 65,536 possible characters, which is nowhere near enough for the full Unicode set; so some characters use pairs of 16-bit values.
UTF-8 uses 8-bit values by default, which means the first 128 values (0-127) are fixed-width single-byte characters (a set most-significant bit signals the start of a multi-byte sequence, leaving 7 bits for a single-byte character value). All other characters are encoded as sequences of up to 4 bytes.
Unicode is a standard, and you can think of UTF-x as technical implementations of it for particular practical purposes:
  • UTF-8 - "size optimized": best suited for Latin-based (or ASCII) data; it takes only 1 byte per character, but the size grows with symbol variety (in the worst case up to 4 bytes per character).
  • UTF-16 - "balance": it takes a minimum of 2 bytes per character, which is enough for the mainstream languages and gives a fixed size for most of them, easing character handling (but the size is still variable and can grow up to 4 bytes per character).
  • UTF-32 - "performance": allows simple algorithms as a result of fixed-size characters (4 bytes), but at a memory cost.

Friday, October 30, 2015

What is INSPECT?

Inspect is an interactive symbolic debugger that enables you to isolate program bugs quickly.

A few high-level Inspect commands that are specific to code 100 (TNS) object files are described in detail below.

IDENTIFIER
Displays information about the internal characteristics of a given data location or of all data locations in one or more scope units. The IDENTIFIER command is a synonym for the INFO IDENTIFIER command.

INFO IDENTIFIER
The INFO IDENTIFIER command displays information about the internal characteristics of a given data location or of all data locations in one or more scope units, including native symbols.

Usage Considerations
  • The INFO IDENTIFIER command can be used for active and for inactive scope units in processes, PATHWAY servers, and save files. In a PATHWAY requester program, the INFO IDENTIFIER command can be used only for active scope units.
  • Types of Entities - The form of the output produced by INFO IDENTIFIER for a given identifier depends on the type of entity the identifier denotes.
  • INFO IDENTIFIER Presentation for Variables - A typical program contains more identifiers for variables than for any other entity class.
MATCH
The MATCH command searches for scope-unit names or other identifiers in the current program.

Default Values
  • If the MATCH IDENTIFIER command is entered with only one pattern, Inspect will display only the matching symbols, and the scopes they were found in.
  • If the VERBOSE clause is omitted, scope names are displayed only if they have a matching pattern.
Usage Considerations

  • Matching Uppercase and Lowercase Letters - The MATCH command does not distinguish between uppercase and lowercase letters in the pattern unless the current language is C.
  • Alias Restrictions with the MATCH command - Aliases are not expanded in the pattern with the MATCH IDENTIFIER or MATCH SCOPE commands.
  • Matching Scopes With a Leading “#” - You must include a “#” to match scopes which have a leading “#”. For example, to match the scope, “#global”, enter: MATCH SCOPE ##global

Saturday, October 10, 2015

Different Object File and Native Processor Types



Object file code, object type and native processor type:

100 - TNS - CISC (Complex Instruction Set Computing), the original Tandem processor.
      TNS objects (COBOL, TAL, etc.) are interpreted on Integrity platforms.

700 - TNS/R - RISC (Reduced Instruction Set Computing), for the MIPS processor.
      TNS/R native object files cannot be executed on a TNS/E platform. All TNS/R native programs must be recompiled using a TNS/E native compiler (for native 800 objects) or the TNS compiler (for interpreted 100 objects).

800 - TNS/E - EPIC (Explicitly Parallel Instruction Computing), for the Intel Itanium processor.
      TNS/E native object files cannot be executed on a TNS/R platform.
      OSS programs must be migrated to the TNS/E platform.
      All TNS/E native code is position-independent code (PIC).

Corporations today depend on HP Integrity NonStop mission-critical servers

Modern HP Integrity NonStop servers continue to deliver on their hallmark attributes:
  - Availability,
  - Scalability and
  - Data Integrity

The HP NonStop Server continues to be the industry leader in mission-critical availability and reliability, with an extensive footprint in financial services and telecommunications marketplaces, and the customers who deploy NonStop have been spared the ignominy that can come with unwanted media coverage.

A few recent wins:

- In America, PULSE selects four NB54000c for Credit Scoring

- In India, Mobile phone operator depends on 300 NonStop processors

- In Norway, Healthcare is operating with NonStop

- In Singapore, Shippers move containers NonStop

- In Germany, auto manufacturer never unseated

HP NSS Vs HP NSB(Blade Servers)


For Migration from COBOL to Java - POC Details

HP and Blu Age® completed a multi-step migration and modernization Proof of Concept (POC) project that took a legacy OLTP application from COBOL and DB2 on an IBM mainframe,

 - first to Java and DB2 on the HP ProLiant DL360 platform, and
 - ultimately to Java and SQL/MX on an HP NonStop server.


The project included comparisons of transaction execution times in the

    - the ProLiant environment,
    - a hybrid NonStop and Windows environment, and
    - a “pure NonStop” environment.

The NonStop Server was clearly the overall winner for the set of transactions comprising this POC.

The Blu Age software suite implements Model Driven Architecture (MDA) principles and offers appropriate methods to deliver Service Oriented Architecture (SOA) application programs written in Java.

The Reverse Modeling component facilitates the extraction of business logic from the legacy code and constructs UML (Unified Modeling Language) models that can be used to generate the application in a Java framework.

The Blu Age software is packaged as a plugin for Eclipse, an open source IDE (integrated development environment) frequently used to develop Java applications.

POC configuration

Configuring the hardware and software for this POC required no extraordinary efforts.

Standard 1 Gb Ethernet connections connected the hardware components used for testing:

• NonStop Server: NS16004, with 10 mirrored data drives, running H06.25, SQL/MX 3.2, and NonStop Java 6.0 (1.6).

• Windows server (for Java middle layer): HP ProLiant DL360 G5, running Windows Server 2008, Java 1.6 and Blu Age 3.7.

The middleware setup required installing JBoss 7.1.1 and its dependencies.

A few minor (and common) DDL adjustments were made when the NonStop SQL/MX tables were created:

- VARCHAR columns with lengths less than 20 were changed to fixed CHAR columns of the same length. (This is a typical optimization practice for NonStop SQL.)

- BIGINT columns were changed to the equivalent NonStop SQL/MX data type, LARGEINT.

- Columns declared with DEFAULT NOT NULL syntax were given a default value of 0.

Thursday, October 08, 2015

Availability, Statistics and Performance (ASAP)

ASAP monitors hundreds of objects in real time.

It provides goals and actions to repair or restart system and application components.

ASAP has a wide range of visual and programmatic interfaces, including:
       - EMS,
       - a conversational interface,
       - fat- and thin-client interfaces,
       - provider/consumer APIs,
       - email,
       - phone notifications with color-coded HTML reports,
       - customizable GUIs with user-defined dashboards, and
       - a long-term historical database with up to 10 years' worth of history of all availability, statistics and performance data.

ASAP can integrate with HP Operations Manager.

ASAPX is an API into the ASAP infrastructure to monitor the availability and performance of abstract application domains.

The ASAP Hybrid plug-in for Linux integrates Linux application metrics into ASAP, providing a view of the entire application spanning NonStop and Linux systems.
       

Tuesday, October 06, 2015

Checklist - To Migrate into OSS - VI - And That's It :)

After the system and database updates, the next step is application installation.


Task 24 - Place Application in Directory Structure
   Prior tasks: 11, 19. Resource: Applications.
   Expand the Application Delivery Package and place it into the proper delivery structure. This often requires utilities like TAR, GZIP and/or JAR.

Task 25 - Configure Pathway and Server Classes
   Prior task: 24. Resource: Applications.
   Configure PATHMON/Pathway naming, locations and server classes (including environment variables), if needed.

Task 26 - Deploy .war Files for Web Interfaces
   Prior task: 25. Resource: WebAdmin.
   Deploy the distributed .war files.

Task 27 - Configure Externalized Properties for Web Interface(s)
   Prior task: 26. Resource: WebAdmin.
   Modify the externalized properties for IP addresses, catalogs, schemas and user IDs.

Task 28 - Configure Online Dumps
   Prior task: 21. Resources: Database, Systems, Operations.
   Use TMFCOM to configure online dump schedules for database protection.

Task 29 - Configure Scheduled Backups
   Prior task: 27. Resources: Database, Application, Operations, Systems.
   Determine the backup fileset, schedule and methods; implement them in the batch scheduler. (Each OSS pathname has a corresponding Guardian filename.)

Task 30 - Identify Logs to Monitor
   Prior task: 27. Resources: Database, Applications, Operations, Systems, Interface Group, WebAdmin.
   Identify the logs to be monitored across the enterprise.

Ported Applications will NOT typically use VHS or EMS.

Monday, October 05, 2015

Checklist - To Migrate into OSS - V

When the system is ready, the next step is for the DBA team to prepare and execute the DDL scripts and get the database ready for use.


Task 20 - Modify DB Scripts for Disk Layout
   Prior task: 3. Resource: Database.
   Edit the SQL DDL scripts to distribute the database.

Task 21 - Create the Application Schemas and Tables
   Prior task: 20.21. Resource: Database.
   Obey the scripts in MXCI to create the application schemas and tables.

Task 22 - Create Metadata (Catalog) Convenience Views (optional)
   Prior task: 21. Resource: Database.
   Download and run the mxschema utility in OSS to create views of the database metadata tables.

Task 23 - Create Data Source for J/ODBC Access
   Prior tasks: 18, 21. Resource: Database.
   Use MXCS or NSM/Web to create a data source.

Checklist - To Migrate into OSS - IV



Task 16 - Create Resolver Links for TCP/IP
   Prior task: 15. Resource: Systems.
   In the Guardian space, edit $system.ZTCPIPHOSTS & RESCONF. In OSS, create symbolic links for HOSTS, RESOLV.CONF, PROTOCOLS, NETWORK and Services in the /etc directory.

Task 17 - Create Data Service
   Prior task: 16. Resource: Systems.
   Use SCF to add a data service.

Task 18 - Add "DBA.manager" to MXCS
   Prior task: 17. Resource: Systems.
   Use the MXCS mode of MXCI to grant OPERATOR privileges to the dba.manager user.

Task 19 - Install Any Open Source Utilities
   Prior task: 18. Resource: Systems.
   Identify the useful utilities and install them. Modify the /etc/profile and users' .profile files.

Checklist - To Migrate into OSS - III



Task 11 - Create Application Directories
   Prior task: 10. Resource: Systems.
   Establish OSS directories for the application, like APPL, BIN, etc.

Task 12 - Install SQL/MX
   Prior task: 11. Resource: Systems.

Task 13 - Install Java
   Prior task: 12. Resource: Systems.

Task 14 - Install Java Add-ons and Other Software Required for the Application
   Prior task: 13. Resource: Systems.
   The following components are needed:
   - J/ODBC drivers
   - JToolkit (contains APIs)
   - Java servlets (required for web apps)
   - HP Jmeter for NonStop Java (performance analysis tool)

Task 15 - Initialize JDBC
   Prior task: 14. Resource: Systems.

Checklist - To Migrate into OSS - Part II



Task 6 - Move QIO to KSEG2 (Required for Java)
   Prior task: 3. Resource: Systems.
   Use SCF to move QIO to KSEG2, and reboot.

Task 7 - Configure TMF
   Prior task: 6. Resource: Systems.
   Use TMF to configure TMF components and objects: audit trails, dump and restore volumes, etc.

Task 8 - Create Mount Point Directory
   Prior task: 8. Resource: Systems.
   Use OSS to establish the mount point directory.

Task 9 - Create Disk Pool
   Prior task: 9. Resource: Systems.
   Create the disk pool definition files/scripts.

Task 10 - Start the Filesets
   Prior task: 10. Resource: Systems.
   Use SCF to add and start a name server if needed. Then add and start a fileset for the new mount point.

Checklist - To Migrate into OSS - Part I

Sunday, October 04, 2015

Tokenization : Creating Secure Test Data

PCI DSS Section 6.4.3 requires that "Production Data (live PANs) are NOT used for Testing or Development".

If we need a huge dataset of real-looking PANs for a test, how do we create them?

A good tokenization solution will allow you to do a "batch copy", where you start with PAN data and end up with tokens.

Doing this in production once is a fast and secure way to create test data: when configured properly, the tokens can look like "real" PANs and can even pass a Luhn check.

Creating a huge set of test data that is not tied to real PAN data at all thus becomes an easy exercise.
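
For illustration, a minimal sketch of the Luhn check mentioned above, in the COBOL used elsewhere on this blog (all names are hypothetical and the PAN is assumed to be exactly 16 digits):

* Hypothetical working storage: a 16-digit PAN, redefined so the
* digits can be addressed individually.
 01  WS-PAN                  PIC X(16).
 01  WS-PAN-R REDEFINES WS-PAN.
     05  WS-PAN-DIGIT        PIC 9 OCCURS 16 TIMES.
 01  WS-I                    PIC 9(2).
 01  WS-DIGIT                PIC 9(2).
 01  WS-SUM                  PIC 9(4).
 01  WS-QUOTIENT             PIC 9(4).
 01  WS-REMAINDER            PIC 9(4).
 01  WS-DOUBLE-FLAG          PIC X.

 LUHN-CHECK.
     MOVE 0   TO WS-SUM.
     MOVE "N" TO WS-DOUBLE-FLAG.
* Walk the digits from right to left, doubling every second one;
* a doubled value above 9 has 9 subtracted from it.
     PERFORM VARYING WS-I FROM 16 BY -1 UNTIL WS-I < 1
         MOVE WS-PAN-DIGIT (WS-I) TO WS-DIGIT
         IF WS-DOUBLE-FLAG = "Y"
             COMPUTE WS-DIGIT = WS-DIGIT * 2
             IF WS-DIGIT > 9
                 SUBTRACT 9 FROM WS-DIGIT
             END-IF
             MOVE "N" TO WS-DOUBLE-FLAG
         ELSE
             MOVE "Y" TO WS-DOUBLE-FLAG
         END-IF
         ADD WS-DIGIT TO WS-SUM
     END-PERFORM.
* The PAN passes the Luhn check when the sum is a multiple of 10.
     DIVIDE WS-SUM BY 10 GIVING WS-QUOTIENT REMAINDER WS-REMAINDER.
     IF WS-REMAINDER = 0
         DISPLAY "LUHN CHECK PASSED"
     ELSE
         DISPLAY "LUHN CHECK FAILED"
     END-IF.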


Challenges of being a Tandem DBA

  • Repartition
    • Reviewing and updating key values
    • Merging Partitions
  • Reuse Partitions
  • Update statistics
    • Gathering SQL/MX statistics
    • Copy Statistics from One Table (or system) to Another
  • Index maintenance
    • Create
    • Populate
    • Drop
    • Alter
  • Specifying Access Privileges
    • Grant
    • Revoke
  • Query Performance Tuning
  • Building and Deploying Standard Queries
  • Dataload/Unload
    • Moving Data from MP to MX for Testing

Friday, September 18, 2015

NSSQL : Varchar columns

Varchar in COBOL

An INVOKE of a VARCHAR column in COBOL gives the following result:

invoke testtab format cobol;

* Record Definition for table

* Definition current at 16:08:08 - 09/06/01

01 TESTTAB.
    02 NAAM PIC X(4).
    02 LANG.
             03 LEN PIC S9(4) COMP.
             03 VAL PIC X(255).

Assigning a value to a VARCHAR column in COBOL

On insert, the length-field of a VARCHAR column must be specified by the program as well as the value-field.

The length-field is filled automatically when the TRIM function is used.

Example:
INSERT INTO =EMS-LG
(
. . .
, TXT
. . .
)
VALUES
(
. . .
, TRIM(:VAL OF TXT OF WS-EMS-LG)
. . .
)

Note: when the source string is nullable, an indicator field has to be used as well.

Example:
TRIM(TRAILING " " FROM ( :VAL OF TGV-REK-NR OF table
INDICATOR :INDICATOR OF TGV-REK-NR OF table))

Assigning a value to a VARCHAR column in SQLCI

If you want to assign a value to a VARCHAR column with a string that is longer than 80 characters using SQLCI, use the following syntax:

insert into =test-tabel values ("test",
"1234567890123456789012345678901234567890123456789012345678901234567890" ||
"1234567890123456789012345678901234567890123456789012345678901234567890" ||
"1234567890123456789012345678901234567890123456789012345678901234567890" ||
"123456789012345678901234567890123456789012345");

--- 1 row(s) inserted.

Select in SQLCI

Selecting a VARCHAR column in SQLCI only shows the first 80 characters.

select long from testtab;

LONG
--------------------------------------------------------------------------------
12345678901234567890123456789012345678901234567890123456789012345678901234567890
--- 1 row(s) selected.

If you are interested in all characters, use the report writer command SET STYLE and change the style option VARCHAR_WIDTH. The maximum is 255.

set varchar_width 255;

select langejan from testtab;

LANGEJAN
--------------------------------------------------------------------------------
12345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890
123456789012345

--- 1 row(s) selected.

For columns even longer than 255 bytes, the LIST_COUNT option of the SQLCI session can be used:

set list_count 1;

select constrainttext from =constrnt
where tablename like upshift("%BTCHFILE%");
detail constrainttext as C3000.75;

list first; (or short: l f)

(also list next / list all can be used)

**** next = enter
**** to cancel the SELECT when not all result rows are shown:
cancel;
set list_count all;