Friday, 11 December 2009

1. Introduction

Recently I ran a few tests at a client's site on SQL performance with large volumes of data. The results were quite interesting, and I summarise them below.

2. Application Description

The client is a financial company which stores large volumes of data for a Basel application that provides a month-by-month view of its contracts. The data are used for some standard reporting and also for non-standard OLAP enquiries, requested especially by Credit Control users.

The application updates 7 different physical files, which are joined together in a join LF called BASUNICO1L with a total record length of 1789 characters. The main keys of this file are the processing period (YYYYMM) and the contract number.

3. Test Environment

The tests were made in a test environment where the files contained about 6 months of data. The join LF held approximately 11 million records in total, of which about 2 million belonged to the period used for the test.

The first set of tests measured the time needed to copy all records of one period from the join logical file to a physical file with the same record layout. The records were written using four different approaches:

- A simple copy file (CPYF) command with a selection such as INCREL(*IF PERIOD *EQ 200806)
- A traditional (file-oriented) Cobol program (TSTFIL1), which used a START to position the file pointer at the first record of the period and then looped over READ and WRITE operations to write all records of the requested period to the output file.
- A Cobol program with embedded SQL (TSTSQL1), which wrote the output records with a simple SQL INSERT of the selected records.
- A Cobol program with embedded SQL (TSTSQL3), which created the output file as an SQL MQT table.

A second set of tests measured the time needed to read some fields of the files, using two approaches:

- A traditional file-oriented Cobol program (TSTFIL5), which performed a START to position the cursor and then read all records of the selected period sequentially.
- A Cobol program with embedded SQL (TSTSQL5), which used an SQL cursor to read all rows of the selected period.

I decided to perform the tests under the following different conditions:

- In an environment without additional indexes (i.e. only the access paths of the files)
- In an environment that could also use additional SQL indexes
- In an environment that used an SQL view instead of the original join logical file

The results are described in the following sections.
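As an illustration, the extract performed by the embedded-SQL program (TSTSQL1) would reduce to a single statement along these lines. BASUNICO1L and the period 200806 come from the article; the target table name BASUNIOUT and the column name PERIOD are assumptions for the sketch:

```sql
-- Hypothetical sketch of the TSTSQL1 extract.
-- BASUNICO1L and period 200806 are from the article;
-- the target table BASUNIOUT and column PERIOD are assumed names.
INSERT INTO BASUNIOUT
  SELECT *
    FROM BASUNICO1L
   WHERE PERIOD = 200806
```

In the Cobol program this would appear inside an EXEC SQL ... END-EXEC block, with the period value typically supplied through a host variable rather than a literal.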

4. Tests based only on the join logical file

The tests made in the original environment to copy about 2 million records, without SQL indexes, gave the following results:

- The CPYF took 19 minutes and 35 seconds
- The traditional file oriented Cobol program took 9 minutes and 20 seconds
- The Cobol program with embedded SQL took 28 minutes

5. Tests done after the creation of SQL indexes

I checked the index advisor file QSYS2/SYSIXADV after the initial tests and noticed a record suggesting the creation of an index on the period field. I created both a vector index and a radix index and then started the new set of tests. The results were the following:

- The CPYF took 7 minutes and 52 seconds
- The traditional file oriented Cobol program took 7 minutes and 7 seconds
- The Cobol program with embedded SQL took 13 minutes and 20 seconds
There was a clear benefit, especially for the Cobol program with embedded SQL. Looking at the log, I noticed that the optimiser had chosen to use the vector index during that execution.
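For reference, on DB2 for i the two index types are created with different statements; what the article calls a vector index corresponds to an encoded vector index (EVI). The table and index names below are invented for illustration, and the column name PERIOD is an assumption:

```sql
-- Sketch only: table, index and column names are assumptions.
-- A binary radix index (the default SQL index type):
CREATE INDEX BASUN_PERIOD_IX
    ON BASUNICO (PERIOD);

-- An encoded vector index, well suited to selecting
-- one period out of a small number of distinct values:
CREATE ENCODED VECTOR INDEX BASUN_PERIOD_EVI
    ON BASUNICO (PERIOD);
```

The optimiser chooses between these automatically; as noted above, it picked the vector index for the embedded-SQL copy.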

It is interesting that if you use the commands WRKOBJ, DSPFD or DSPDBR, the indexes appear as logical files with a special SQL type attribute whose value is INDEX.

It is also interesting that the space used for the indexes is much less than what is required for a logical file. In my test environment one of the main PFs used about 500 megabytes, an LF used about 257 megabytes, the radix index occupied about 175 megabytes and the vector index only 22 megabytes.

6. Test of file creation as an MQT (Materialised Query Table)

I tried to create the target file by using a Cobol program with embedded SQL that produced the output table as an MQT instead of the previous INSERT, and I found that the times were very good.

The program took just 57 seconds to produce the results.
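A materialised query table over the same selection can be declared and then populated on demand roughly as follows on DB2 for i. This is a sketch under assumptions: the table name BASUN_MQT and column PERIOD are invented, and the exact refresh options available depend on the OS release:

```sql
-- Sketch only: table and column names are assumptions.
CREATE TABLE BASUN_MQT AS
  (SELECT *
     FROM BASUNICO1L
    WHERE PERIOD = 200806)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY USER;

-- Populate (or repopulate) the MQT on demand:
REFRESH TABLE BASUN_MQT;
```

Because the database engine materialises the whole result set in one operation, rather than row-by-row through program logic, this approach avoids most of the per-record overhead of the other methods.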

7. Reading Test

I compared the time spent by two programs to read all records of a selected period and sum up a total. The first program was a traditional file-oriented Cobol program, whereas the second was an SQL Cobol program which included an SQL cursor to read all selected records.

The first program took 13 minutes and 17 seconds to complete, whereas the second took just 7 minutes and 14 seconds on its first execution and much less on subsequent ones.
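The SQL side of this reading test boils down to a cursor over the selected period. In embedded SQL the declarations would look something like the following sketch, where the column name SALDO and the host variable are invented for illustration:

```sql
-- Sketch of the reading-test cursor; column and host
-- variable names are assumptions, not from the article.
DECLARE C1 CURSOR FOR
  SELECT SALDO
    FROM BASUNICO1L
   WHERE PERIOD = 200806;

-- In the Cobol program: OPEN C1, then FETCH C1 INTO :WS-SALDO
-- in a loop until SQLCODE = 100, accumulating the total.
```

The speed advantage on repeated runs is consistent with the optimiser reusing the open data path and access plan from the first execution.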

8. Using SQL Views instead of Logical files

I tried to use an SQL view equivalent to the logical file and repeated the tests above using the SQL view. The results were not significantly different from those based on the logical file; however, I found that a view required much less space than an equivalent LF.

The SQL views again appear to the system as special logical files, with the SQL type attribute containing the value VIEW. The space occupied by a view is much less than that required by a logical file: in my test environment, a view equivalent to a join logical file of about 405 megabytes required less than 1 megabyte.
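The space difference is explained by the fact that a view is just a stored SELECT over the underlying physical files, with no access path of its own. A simplified two-file version of such a view might look like this (the physical file names and columns are invented; the real BASUNICO1L joins 7 files):

```sql
-- Sketch only: physical file and column names are assumptions.
CREATE VIEW BASUNICO1V AS
  SELECT H.PERIOD, H.CONTRACT, H.CUSTOMER, D.AMOUNT
    FROM BASUHEAD H
    JOIN BASUDETL D
      ON H.PERIOD   = D.PERIOD
     AND H.CONTRACT = D.CONTRACT;
```

Unlike a DDS join logical file, the view carries no keyed access path, which is why the optimiser relies on SQL indexes over the base tables instead.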

9. Conclusions

The tests described above seem to demonstrate the following:

The fastest option for extracting the data was the Cobol program that used embedded SQL to create an MQT.

The traditional file oriented Cobol program was faster than the corresponding Cobol program which included an embedded SQL INSERT statement. The execution time of the Cobol SQL program was significantly affected by the creation of the SQL vector index.

The traditional file oriented Cobol program that read all records of a selected period by using START and READ NEXT operations was slower than the Cobol program which read the records with an SQL cursor.

The results agree with some points of the IBM Redbook Modernizing IBM eServer iSeries Application Data Access - A Roadmap Cornerstone (redbooks.ibm.com/abstracts/sg246393.html?Open), where it is written that:

SQL INSERT operations are slower than Cobol WRITE statements, because SQL operations include more validations than writes into a DDS physical file. SQL reads are faster than HLL operations, mainly because a cursor reading an SQL table does not carry the extra data-cleansing code of a DDS physical file read. Using SQL views instead of logical files should allow a significant reduction in space occupation.

The final conclusion is that a wise use of SQL can bring significant improvements in application performance.

Wednesday, 9 December 2009

Scenario: Multiple IBM i5/OS logical partitions using the HMC modem and the modem on an i5/OS logical partition

This scenario demonstrates how to configure multiple i5/OS® logical partitions to connect to service and support through the modem on the i5/OS logical partition and to enable the HMC to connect to service and support through its own modem. In the event the HMC modem is busy or unavailable, the HMC can use the modem on the i5/OS logical partition to connect to service and support. Or, in the event the i5/OS modem is busy or unavailable, it can use the modem on the HMC to connect to service and support.

Situation

If you are responsible for maintaining servers at your company, one of your roles might be to establish the connections within your network and to the service and support organization so that your servers can access service and support resources. For this scenario, you are using an HMC to manage your server, and your server is divided into multiple logical partitions running multiple operating systems. You want to enable the server to use the modem on the HMC or the modem on the server to connect to service and support. In the event the HMC modem is busy or unavailable, the HMC can alternatively use the modem on the i5/OS logical partition to connect to service and support.

Objectives

In this scenario, you want to ensure that your company's server can receive support from the service organization when requested by your company's network administrator. The objectives of this scenario are as follows:

To set up the HMC to connect to service and support
To configure the i5/OS logical partition (non-service partition) to use the modem on the i5/OS service partition to report i5/OS problems to service and support, and as a backup, to use the HMC modem
To configure the i5/OS logical partition that has the modem for dial-up connection to service and support, and as a backup, to use the HMC modem
To enable the logical partitions running Linux®, AIX®, and i5/OS (non-service partition) to use the modem on the HMC to report service information, problems related to server hardware or firmware (Licensed Internal Code), and problems related to Linux and AIX to service and support
Details

The following figure illustrates the flow of service information and problems through the service connection to service and support.

Figure 1. This diagram shows the flow of information and problems from four logical partitions and the HMC to service and support. The information and problems pass through the modem on the HMC or through the modem on the i5/OS logical partition.



The figure illustrates the following points relevant to this scenario:

The HMC setup is complete.
The server has four logical partitions with the following operating systems installed:
Linux
AIX
i5/OS
i5/OS (this is the service partition)
The PPP connection is configured on the service partition and connects to service and support.
The logical partitions use either the modem on the HMC or the modem on the service partition to connect to service and support.
Service information and problems flow from each logical partition to service and support using a modem connection, as follows:
Linux logical partition > HMC > Service and support
AIX logical partition > HMC > Service and support
i5/OS logical partition > HMC (for service information) > Service and support
i5/OS logical partition > i5/OS service partition (for i5/OS problems) > Service and support
i5/OS logical partition > HMC > Service and support (backup)
i5/OS service partition > Service and support
i5/OS service partition > HMC > Service and support (backup)
Note: If the HMC modem is busy or unavailable, the HMC and logical partitions can alternatively use the modem on the service partition to send all service information and problems to service and support.

Prerequisites and assumptions

Successful implementation of this scenario requires that all necessary hardware planning and setup tasks are complete. Additional prerequisites and assumptions are noted in the appropriate places within the configuration tasks.

Configuration steps

You must complete the following tasks:

Ensure that all prerequisites are met for your connection method. Refer to Task 3. Prerequisites.
Ensure that your physical networking is set up correctly. See Task 4. Ensure that your physical networking is set up correctly.
Obtain or verify your IBM® ID. See Task 5. Obtain or verify an IBM ID.
Set up the HMC to connect to service and support. You can use either of the following methods to set up the HMC.
Guided Setup wizard (recommended method): The Guided Setup wizard is a tool on the HMC designed to guide you through the steps of setting up the HMC, including connectivity from the HMC to service and support. Although you typically use the Guided Setup wizard when you first set up your server, you can also use it to verify that your connections from the HMC to service and support are set up correctly. For details, see Task 6. Verify the HMC service settings.
Manual setup: If you prefer to use the manual method to create your service connections from the HMC to service and support, see Manually set up the HMC to connect to service and support.
Set up and configure the logical partitions. For details, refer to the Partitioning the server topic.
Install i5/OS on your system or logical partitions. For details, refer to the Installing operating systems topic.
Configure your TCP/IP network. For instructions, refer to the operating system documentation.
Activate TCP/IP on your logical partitions. TCP/IP starts automatically, as long as the network adapter is recognized and can communicate with the network when the i5/OS operating system is started.
Note: If TCP/IP does not activate, type STRTCP at an i5/OS command line to start TCP/IP. This command initializes and activates TCP/IP processing, starts the TCP/IP interfaces, and starts the server jobs. Only TCP/IP interfaces and servers with AUTOSTART *YES are started with the STRTCP command.

Configure PPP connection from the i5/OS service partition to report i5/OS problems to service and support. See Task 11. Configure Electronic Service Agent for i5/OS.
Configure PPP connections from the i5/OS logical partitions to use the modem on the service partition to report i5/OS problems to service and support. See Task 11. Configure Electronic Service Agent for i5/OS.
Make sure that the HMC modem is set up to allow use by the logical partitions.
Register the IBM ID for i5/OS. See Task 12. Register the IBM ID for i5/OS.
Verify that the connection to service and support is set up correctly and that information is being transmitted correctly. See Task 13. Test the connection to service and support.
View information that was shared with IBM. See Task 14. View the server information that was reported to IBM.
Parent topic: Scenarios: IBM i5/OS
Document Title
IBM® iSeries™ Access for Windows PC5250 Configuration for HMC Remote Console Connections

Document Description
This document provides detailed instructions for configuring the 5250 emulator included with IBM® iSeries™ Access for Windows for an HMC remote console connection. This configuration is for a non-SSL connection to the HMC. The configuration does not apply to other emulators such as the IBM Personal Communications product. For further information, refer to the IBM® eServer™ Information Center topic Connecting to a 5250 console remotely at the following Web site:

http://ping.fm/UlDGl

Step 1: Verify iSeries Access for Windows Code Level

The iSeries Access for Windows emulator being used must be at Version 5 Release 3 Service level SI13587 or later. To verify the service pack level, select Start > Programs > IBM iSeries Access for Windows > iSeries Access for Windows Properties.




Step 2: Configure the PC5250 Remote Console Session

1 Select Start > Programs > IBM iSeries Access for Windows > Emulator > Start or Configure Session.
2 From the IBM Personal Communications - Session Manager dialog that appears, press the New Session... button.
3 In the Configure PC5250 dialog:
a Update the System Name to the HMC host name or TCP/IP address.
b Set the port number to 2300.
c Then press the Properties button.


4 The properties button will launch the Connection dialog shown below.
a Set the User ID sign on information to Use default User ID, prompt as needed.
b Set the User ID to Q#HMC.
c Set the Security to Not secured.
d Click OK, OK.


5 Save the profile.

To save the workstation profile configuration for future use, click the Menu option, File then Save. Enter a profile name and click OK. The workstation save creates two files. Both file names are the same as the profile name with extensions of .ws and .cae.

Note: Do not move or copy only the workstation profile file (extension .ws). Moving only this file will result in the loss of the connection information, which causes a CWBCO1048 connection error. When possible, create a shortcut to the profile rather than a copy. If the profile must be moved or copied, copy both files to the new location.
Step 3: Verify the HMC Firewall Configuration

The firewall must be enabled for remote console, regardless of the type of emulator used. This information is not specific to the iSeries Access for Windows product.

1 In the navigation area, expand the HMC you want to work with. HMCs are listed by host name or TCP/IP address.
2 Expand HMC Management.
3 Click HMC Configuration.
4 In the contents pane, click Customize network settings.
5 Click the LAN Adapters tab.
6 Select the LAN adapter you want to work with and click Details.
7 Click the Firewall tab.
8 Select the 5250 application in the top table. Click Allow Incoming to allow all TCP/IP addresses or click Allow Incoming by IP Address to allow only specific addresses. Click OK. 5250 should now display in the Allowed Hosts table.

Friday, 4 December 2009

If you really believe that your backups are sound, would you be comfortable erasing everything on your hard drive right now, and restoring it from backups?

If not, why not?

No matter how sophisticated or comprehensive your backup system is, you will never know if it works unless you actually test it. Without testing, you can have no confidence at all. Here are just some things that can go wrong with backups through no direct fault of your own:

Failed backup software: The backup software simply fails and when you restore the data from the backup media, many files are missing or corrupted. It's not good enough to rely on the backup software to tell you that it has securely written a tape. In a real and recent case, involving one of the most popular backup products on the market, a tape could not be properly reloaded, even though it was written with the "verify" option turned on.

Incomplete data: You discover that someone in a hurry to do a backup one day configured the backup software to backup just their directory, and never got around to resetting the configuration. So your twenty-seven backup tapes all consist of a copy of just one directory.

The magnet: You store your backups in a safe-deposit box in a bank. However, someone has stored their very precious collection of rare magnets in the box next to yours.

Encryption: Back in the dark ages when the backup software was installed and configured, someone set it up to write encrypted backup tapes. Now nobody knows what the password is.

Old version: You find you can't reload an old backup tape because it can only be read by an earlier version of your backup software which you recently upgraded.

Poor quality media: You discover that the media you're using for your backups is of poor quality and you can't get the data off it after six months.

Compression: The new compression feature in the latest version of your backup software turns out to compress better than anyone thought...

Upgrade: Your backups appear to be working. You even tested one once and it worked. But you upgrade your operating system and a bug in the backup software manifests itself, causing the backup to fail.

Tape breakage: Your backup tape gets broken during the backup and you send it offsite without noticing.

Backup crash: You set a backup going one night. The next morning you come in and find the tape on your desk. A friendly work colleague has taken it out of the drive and left it for you. What they didn't tell you was that they found the machine rebooted. The machine crashed during the backup and the tape drive automatically rewound the tape.

These are just some of the things that can go wrong. The important thing is to realize that there is a huge number of things that can go wrong with a backup operation. Some of those things will be intermittent. Others will be systemic, meaning that all your backups will be useless.

The only way to tell if your backups are working is to actually load a backup tape and see if the data is correctly restored. This may be difficult or impossible for many users who have filled their hard disk to capacity. However, testing your backups is such an important activity that it really is worth either reducing your disk usage to less than half of your hard disk, or finding another hard disk, so that you can perform a test restore.

If you feel that such testing is unnecessary, ask yourself if you are ready to face the backup challenge: would you feel comfortable erasing your hard disk right now, and restoring it from your backups? If not, then think again.
Programming modernisation RDi

Wanting to know just how it feels to use the new modern i5 Rational Developer tools, but not sure a) how to set them up, b) where to start, or c) what they can really do for you?

Are you confused by the flood of information and PowerPoint presentations provided by IBM on this?

Why not save yourself the trouble of installing and trying it by using our on-line service: real tools, properly installed and connected to an i5, with the freedom to try any code you wish?

It's easy with our RDi sampler service: you can look at our demo source code and try a proper Windows environment with these tools, right now.

We can provide access for a day up to a few months, and we can also provide training and troubleshooting to ease you through the process.


Why choose Mid-Blue?

We have proven and extensive knowledge of all aspects of IBM mid-range hardware and software; we are the company people turn to for no-nonsense advice and help when they need it most. 01322 407000

Wednesday, 2 December 2009

Tuesday, 1 December 2009

Do you think your backup tape works, then?
The Cool Tape Tester should be working very soon; I have high hopes for this: www.tapetester.com

Monday, 30 November 2009

Thin Client Computing on the iSeries

Corporate America has been looking for ways to reduce the cost of ownership associated with Microsoft Windows®-based computing, causing the popularity of Windows-based Thin Clients, coupled with deployment of Terminal Server and Citrix, to grow at a rapid rate. A new trend has been developing: enterprises are now trying to reduce costs further by moving to “intelligent” Thin Clients.

In contrast to “dumb” thin clients, which simply acted as terminals allowing access from desktops to server-based applications, “intelligent” Thin Clients come with built-in emulation software; e-mail; a full-function browser—such as Mozilla Firefox or full Internet Explorer—supporting JVM, Flash and XML; and pop-up window support. These intelligent Thin Clients are served up with Linux or Windows XPe operating systems, and allow access to a file server as well as Web-delivered applications such as Lotus Notes, WebSphere, etc.

Several options now exist for companies wishing to deploy Microsoft Office-type thin client products:
Traditional: Serve up these applications using Microsoft Terminal Server (2000 or 2003 server), Citrix, etc., delivered using a remote desktop protocol. Costs associated with this method are: Terminal Server Client Access License at $120; Client Access License at $30; Microsoft Office Professional at $395; and Citrix (if applicable) at $250. The total software cost per user is $545 to $795. The total cost of ownership for software and hardware (Windows CE Thin Client) in a LAN environment is $994 and in a WAN environment is $1,194.

Hybrid: Keep Microsoft Office and serve it up using a Linux server and CrossOver Office. This product allows some Microsoft products to be installed and served up via X-Windows to a Linux desktop or Linux Thin Client. The costs for this method are: CrossOver Office at $50 and Microsoft Office Professional for $395. The total software cost per user is $445. The total for software and hardware (Linux Thin Client) in a LAN environment is $895 and in a WAN environment is $995.
Local Linux: Install OpenOffice on a Linux server and use X-Windows on a Linux desktop or Linux Thin Client. The cost of this method is: X-Windows at no cost and OpenOffice at no cost. The total software cost per user is $0. The total cost of ownership for software and hardware (Linux Thin Client) in a LAN environment is $450.

Versatile Linux: Purchase a Thin Client with embedded OpenOffice. Files are accessed and stored via a Linux or Windows file server. This option incurs an additional cost for a hardware upgrade, since it requires more disk space and more memory. It is best suited to remote users, as X-Windows can be bandwidth-intensive and is therefore not recommended for WAN use. Cost of this method: OpenOffice at no cost and additional memory for $125. The total software cost per user is $125. The total cost of software and hardware (Linux Thin Client) in a LAN/WAN environment is $575.
Remote Linux: Install OpenOffice on a Linux Server and serve up applications using the Tarantella Enterprise 3 product. This option supports compression and encryption of the X-Windows protocol, allowing applications to be served up efficiently on a WAN. The cost of this method is: X-Windows at no cost; OpenOffice at no cost and Tarantella Enterprise 3 at $100. The total software cost per user is $100. The total for software and hardware (Linux Thin Client) in a WAN environment is $550.

Corporate management has come to realize that the PC revolution has its drawbacks. These drawbacks include higher maintenance costs, lower employee performance, and system vulnerability. The hardware becomes obsolete and higher-performance CPUs are required. Operating systems typically need updating every three years. When you figure in the licensing fees; the time spent by system administrators updating each workstation, installing and maintaining anti-virus software; the downtime this causes other employees; and the additional time required by employees to learn each new revision; the resulting price tag is quite sizeable!

There is currently a growing trend for Thin Clients using embedded emulation alongside server-based or Web-delivered applications. This trend has generated new interest in terminal computing, specifically in the field of thin clients. Linux Thin Clients in particular allow server-based computing without the additional licensing and provide file server access enabling complete independence for applications, whilst maintaining sharing and back-up control.

For a longer version of this article go to http://www.bosanova.net/thinclientbill.html

Martin Pladgeman is President of BOSaNOVA, Inc., a leading developer of security solutions, enterprise-class thin clients and iSeries connectivity solutions. BOSaNOVA thin clients provides a choice between CE.Net, LINUX, and XP Embedded and offers a host of unique features. For more information, visit us online at http://www.bosanova.net or info@bosanova.net.

Article Source: http://EzineArticles.com/?expert=Martin_Pladgeman

Wednesday, 25 November 2009

IBM Up to 70% reduced memory pricing makes it even easier to cut operational costs with Power Systems

In today’s economy IT managers often feel as if they’re walking a tightrope. Take one step: manage costs and maximize ROI from IT investments. Take the next step: keep critical business applications performing at the highest levels possible. With each step try to strike just the right balance between cost and performance in the data center. A proven strategy for achieving this goal is virtualized system consolidation. Consolidation can enable immediate cost saving by reducing energy requirements, saving necessary floor space, reducing operational and administrative cost. It can also help deliver optimized workload performance.
Power = ROI – sprawl
IBM Power Systems virtualization solutions offer proven capabilities to support the most efficient use of computing resources while still providing the application performance your customers, employees and partners demand. As virtualization use increases, the demand for memory increases as well. As customers consolidate more workloads from SPARC, Itanium and x86 onto Power Systems, the long-term benefit of these consolidation solutions has become clear and companies like yours need the bottom line results. Your procurement officer is looking at acquisition cost. Your CFO is concerned about time to return on investment. IBM has heard these concerns and is making a major move to ensure companies find it even more attractive to consolidate on Power Systems.
Beginning November 17, 2009 in the US and Canada, and by November 24, 2009 in most other countries, IBM is lowering the price of Power Systems memory features by up to 70%, helping to lower the hardware cost of a powerful and more efficient virtualization solution by up to 41%. Consider as an example a current Sun customer who would like to consolidate 25 Sun Fire V490 and V890 servers with 128 cores running database applications and another 128 cores running other applications. That customer could consolidate onto two IBM Power 550 Express servers. Another option would be to consolidate on 8 HP blade servers, but that would have a cost of $1,000,000 more in the first year alone.1 This isn’t a promotion, but a permanent price reduction. IBM is committed to offering lower total cost of acquisition to help you accelerate a return on your investment — and realize even greater savings from virtualization and consolidation on Power Systems.
IBM Power Systems virtualization solutions built on PowerVM offer proven capabilities to support the most efficient use of computing resources while still providing the application performance your customers, employees and partners demand. Virtualization and consolidation have become key tools in the cost/performance IT equation. By consolidating smaller systems onto larger, more powerful systems, you can save floor space, energy costs and administration effort.
Now with IBM Power Systems virtualization solutions, you get:
Uncompromised virtualization performance with PowerVM
Proven, reliable IBM technology built on POWER processors
High availability and disaster recovery with PowerHA SystemMirror
Industry leading virtualization management capabilities with IBM Systems Director VMControl
Legendary IBM service and support
And you get it all at a lower cost than ever before. In 2009, hundreds of customers have switched from Sun or HP to IBM Power Systems for their critical business applications. To find out more contact your IBM representative, Business Partner, Mid-Blue International Ltd on 01322 407000

Monday, 23 November 2009

IBM Linux Advert

IBM eServer iSeries - The Laughing Boardroom

Simple Step to Create Disaster Recovery Plan

Nowadays, preparing a Disaster Recovery Plan (DRP) template for your company is quite easy. On the internet there are many free templates that can be used to meet your needs. Below are some tips to help you create an effective DRP template.

Choose the generic template first.

A generic template that can be used in many industries is the first place to go. You can find a lot of ideas and concepts in such a template.

Avoid technical terms in the DRP document

Many people fall into the trap of designing a DRP document that is too technical to read. It should be understood that a DRP will be read in an emergency situation, where panic and high tension are likely.

Test, Test and Test

The DRP document should be tested regularly and updated to reflect the latest situation.

Once you have an idea of how to develop a DRP, the next step is to find some templates for the DRP document. Here are some basic free templates you can choose from; more free templates can be found on many websites, such as securityprocedure.com.

1. Disaster Recovery Plan from TechRepublic

TechRepublic, an online computer forum, provides a free 23-page DRP template. This template can be adapted to your own scenario by replacing the client 1 and client 2 names in the document. It is quite sufficient for a small or medium company. You can download it and find more useful resources on their website.

2. Disaster Recovery Plan from IBM

IBM provides a free template for your DRP. Although the template is designed around the IBM iSeries, most of it can be used with any type of application. The objective of an IBM DRP is to ensure that you can respond to a disaster or other emergency that affects information systems and minimize the effect on the operation of the business.

For more information about Disaster Recovery Plan documents, please visit Securityprocedure.com. We provide a free Disaster Recovery Plan template and other security templates useful to your company, at no cost.

Article Source: http://EzineArticles.com/?expert=Anjar_Priandoyo

The Progress of Thin Clients in the iSeries Arena

When thin clients first appeared on the market over 10 years ago, IBM was a great proponent of this technology even though the first units had slow CPUs, limited memory, and Linux-based operating systems.
In the early days of thin clients, networks were still employing hubs instead of switches. Hubs share all network traffic with all the attached devices; therefore, this type of technology did not work well with the boot-server type of thin clients. In other words, when many thin clients were started at the same time, the network was unable to handle the traffic caused by downloading the OS to every thin client.

Windows CE

As time went on, new operating systems like Windows CE were developed. Networks started to use switches, eliminating the need to share bandwidth between devices while supporting full-duplex traffic (sending and receiving at the same time). Small in size, Windows CE could fit on the Disk-On-Chip (DOC) mounted inside the thin client and also provide a place to embed applications.
Windows CE, however, had its own challenges. Meanwhile, the Windows server platforms (NT Terminal Server and Windows 2000 Server) and early versions of Citrix were delivering what people expected from a hosted-application server platform in terms of color, compression, and encryption.

Thin Clients Today

Over the last few years, the average CPU speed has increased to over 500 MHz. With a faster CPU, running Linux or embedded Windows XP (XPe) has become much more feasible. These operating systems are better-suited to running local applications; in addition, many provide full-function browsers with Java Virtual Machine (JVM) capability, email, enhanced terminal emulation, and much more of the functionality that users are used to seeing on the PC.
Windows 2003 for thin clients now has high-color support, sound delivered to the desktop, data compression, and better printing support. Citrix has also updated its product to provide better security, better compression, bidirectional sound, published applications, and security enhancements such as smart card and biometric device support.
Consultants have been saying for a long time that thin clients are the future. Thin client technology has finally caught up with the vision. Most intelligent thin clients can work nearly like a PC, giving the user all of the desired flexibility and performance while providing the data processing department with its control and security benefits.
Today there are many thin client models to choose from with varying CPU speeds, memory capacities, storage capacities, and operating systems.

Customized Thin Clients

Customization of thin clients is gaining importance with both Linux and XPe models because customers have their own ideas of what is needed on a machine. Thin-client manufacturers that can respond quickly to customization requests are gathering momentum in what is becoming a huge market.

Sophisticated Printer Support

As a company requires printing of more complex types of data, such as barcodes, it is critical that the chosen thin-client technology can handle necessary printers and data streams. Barcodes require a more sophisticated printer session on the thin client.

Tablet Thin Clients

A new thin-client device that is proving to be beneficial to many companies is the wireless tablet. The tablet takes the place of a hand-held barcode scanner with a small display.
In a warehouse environment, since inventory control software normally runs on an iSeries, a fixed terminal from which data is entered and checked is usually placed somewhere in the warehouse. Some thin-client manufacturers now sell wireless thin-client tablet displays so that existing full-screen inventory programs can be used directly on the tablet anywhere in the facility, eliminating the need to modify the inventory program. The tablet can be carried by the user with a neck strap or mounted on a forklift or on the wall.

All-in-One Thin Clients

Another growing trend is the use of "all-in-one" thin clients. These thin clients have the thin-client technology contained inside an LCD flat-panel monitor. The benefit here is to minimize the amount of desk space used and remove the clutter of additional wiring. The apex of this is a wireless all-in-one unit with a touch screen.

Biometric Security Features

The ability to secure access to thin clients with a biometric device is beginning to appear in thin-client applications. A biometric device is a fingerprint reader that is either a direct-attached device or a device that connects via the mouse, keyboard, or monitor. When prompted, the user places his or her index finger on the reader and the device verifies identity by comparing the fingerprint with those already recorded. Once a user is verified by a biometric device, the thin client can start various programs based on who the user is and how the software is programmed. Biometric authentication saves time, provides an additional layer of security, and eliminates the need for users to remember their passwords.

A New World of Thin Clients

The demand for thin-client technology is increasing while smaller, faster, more portable, more secure, and more adaptable thin-client products keep appearing on the market. BOSaNOVA, Inc. is a leading developer of enterprise-class thin client and network appliance solutions for Linux, XP, and CE.Net.
For a longer version of this article go to http://www.bosanova.net/thinclientprogress.html

Martin Pladgeman is President of BOSaNOVA, Inc., a leading developer of enterprise-class thin clients and iSeries connectivity solutions. In addition to their thin client and connectivity suite, BOSaNOVA's Launcher/400 business intelligence solution provides ways for customers to improve productivity and reduce costs. Detailed information on BOSaNOVA Thin Clients and iSeries Connectivity Solutions can be found online at http://www.bosanova.net

Article Source: http://EzineArticles.com/?expert=Martin_Pladgeman

AS400 and iSeries Effective Screen Design

"Successful screen design is based on how well the developer knows both the user and the data."
- Bryce's Law

INTRODUCTION

Some time ago I was working with a hospital in the Midwest that was trying to automate some patient admission forms. Hospital forms are notoriously complicated and voluminous (thanks to the lawyers), and this hospital was no different. This made it difficult for the hospital to gather the necessary data about a patient, their physician, and their insurance carrier. As such, they wanted to automate the forms, thereby simplifying the collection of data. Unfortunately, the resulting screen designs were essentially no different from the forms. They were very busy and complicated, with few editing checks. Frankly, they were no better than the forms they were trying to replace and, because of this, use of the screens was spotty at best.

Designing a computer screen is essentially no different than designing a paper form. But since most of today's developers have little experience in forms design perhaps it is time to review some of the basic elements of good design. First, because a screen or form represents how a human being will interface with a system, we must consider the man/machine interface; its ergonomics. This means we must first understand the intended user, including his/her intelligence level and senses. Someone with a greater proficiency in using a computer will have less difficulty in using complicated screens than someone less conversant in computer technology. As to senses, there is little point in devising an elaborate color scheme if the user may be colorblind. Again, know thy intended user.

For more information on ergonomics, see No. 65 - "What Ever Happened to Ergonomics?" - March 6, 2006 http://www.phmainstreet.com/mba/ss060306.pdf

The objective, therefore, in good screen design (and forms design) is to make something that is easy to use (intuitive; requiring little interpretation and confusion) and effective for collecting data and displaying information. Although the following discussion can be applied to screens as used in some character based operating systems, it is primarily concerned with Graphical User Interfaces (GUI) as used in today's popular operating systems.

The GUI was originally introduced with Xerox's Star computer in the early 1980s. Following this, several companies emulated the Star, including Apple, Microsoft, IBM, and Sun. The GUI was extremely popular as it offered an ease of use never before thought possible. The only problem was that it lacked standards, whereby one GUI program did not behave in the same manner as another GUI program. Fortunately, standards started to appear in the late 1980s with IBM's CUA (Common User Access) standards, which provided a detailed list of design standards for developing a GUI-based program. (NOTE: CUA was an important part of IBM's System Application Architecture standards - SAA). The benefit of CUA standardization was that users familiar with one GUI program could quickly be trained in how to use another GUI program, since they essentially behaved the same. Today, there are now different interpretations of the CUA standards as implemented by different computer vendors (Gee, what a surprise! ;-) Nonetheless, designing a GUI screen in accordance with accepted standards is preferred over developing a screen without such standards.

DESIGN CONSIDERATIONS

Today there are some pretty slick tools to quickly build screens. Regardless of their capabilities, a developer should be cognizant of three basic design considerations: Layout, Data Entry, and Support:

A. Layout

The objective here is to make the screen "clean" and consistent. Too much detail makes the screen cluttered and abrasive to the end-user. When designing your screen, consider eye movement, eye strain and, where appropriate, add magnification. Here are some tips for consideration:

Alignment - there should be some simple symmetry to the screen. Disjointed alignment of fields, text, and images tends to alienate users. There should be a comfortable amount of spacing, not only around the edge of the screen but between sections of the screen. Because GUI windows can be resized (either maximized or set to a height and width chosen by the user), consider how the screen will look in either form. Borders are useful for defining sections on the screen, but be careful that they do not become overbearing and distracting.

Zoning - this refers to the establishment of sections within the screen. This is useful if different types of users are going to be accessing the same screen, or if different sections serve distinctly separate purposes (thereby not confusing one with another). Borders and colors can be useful for distinguishing sections. In a GUI window, notebook tabs can be useful.

Flow - there should be an obvious flow to the screen that will naturally catch the user's eye and prompt him/her in the proper direction. Understand this, Western countries generally observe things from left-to-right and top-down; Eastern countries observe things top-down and from left-to-right; and Middle Eastern countries observe things from right-to-left and top-down. Also understand that the tab order of the keyboard provides direction for the user. As such, the tab order on a screen should go in a logical order and not jump around meaninglessly.

Type Fonts - use common fonts familiar to users. Fancy fonts may be impressive, but will they be supported on all of the computers from which the screen will be accessed? Commonly accepted fonts include Arial, Courier, Sans Serif, and Times Roman. Devise a standard font point size; 10 is generally agreed to be readable by the average person, but then again, will your end-user be an average person? Also, devise a standard scheme for upper-case and lower-case lettering and type styles (e.g., bold, italic); such subtleties will naturally attract the eye.

Colors - colors can be helpful for highlighting sections, accenting required field entries, or improving general appearance. Although colors can be helpful, they can also be distracting if they become overbearing. Be sensitive to color contrasts so the user can adequately read the screen. Also be cognizant of end-users who might be colorblind.

Headings - screen headings should be placed in a standard position for easy identification by the user. A formal name and, where appropriate, a screen number should be clearly visible to the user.

Keyboard/mouse relationship - in the event a computer mouse breaks down or is simply not available, the user should still be able to operate the screen using simple keyboard commands. CUA standards are particularly useful in this regard.

B. Data Entry

The proper entry of data is just as important as the physical layout of the screen. Regrettably, many designers take a superficial approach to data collection and, consequently, a lot of time is spent later cleaning up data in the database. Considerable time can be saved with a little effort here in screen design. Your objective, therefore, is to produce a screen that will collect "clean" data (as opposed to "dirty" data that will have to be corrected later on).

Before embarking on screen design, the developer should be intimate with the data specifications. This can be obtained either from a good data dictionary/repository, or from the physical data base design. Basically, the developer is looking for the data element's:

- Length - the maximum number of characters which may be assigned to a data element.

- Class - the type of characters to be expressed; e.g., alphabetic, numeric, alphanumeric, signed numeric, etc.

- Justification - the alignment of data within a field when the number of characters is less than the length of the receiving field, e.g., left, right, around the decimal point.

- Fill Character - the character to be used to complete a field when the data item to be placed in the field is shorter than the maximum length, e.g., blank, zero, X, etc.

- Void Character - the character to be used when a data item's value is unknown or nonexistent, e.g., blank, zero, X, etc.

- Unit of Measure - the representation of numeric data, e.g., area, volume, weight, length, time, energy rate, money, etc.

- Precision - for numeric data, the number of significant digits in a number.

- Scale - for numeric data, the placement of the decimal point.

- Validation Rules - the specific values which the data element may assume, including default values. For example, Yes/No, specific codes or numbers to be used, editing rules, etc. This includes such things as the expression of dates:

20051211

December 11, 2005

12/11/2005

2005/12/11

11-DEC-05

- Generated data - quite often it is necessary to show computations based on primary values input by the user. As such, it is necessary to know the data dependencies and the formulas for calculating the generated values.

- Program Label - although this will not be visible to the user inputting the data, the developer must understand how the data element is referenced in the data base.
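To make these attributes concrete, they can be captured in a small structure before any screen is built. The following is only a sketch in Python with hypothetical names (DataElement, validate); a real data dictionary would carry many more attributes (justification, fill character, precision, and so on).

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a data element's specification.
@dataclass
class DataElement:
    name: str                 # Program Label: how the element is referenced
    length: int               # Length: maximum number of characters
    char_class: str           # Class: "alphabetic", "numeric", or "alphanumeric"
    allowed: set = field(default_factory=set)  # Validation Rules: permitted values

    def validate(self, value: str) -> bool:
        """Check an entry against the element's specification."""
        if len(value) > self.length:
            return False
        if self.char_class == "numeric" and not value.isdigit():
            return False
        if self.char_class == "alphabetic" and not value.isalpha():
            return False
        if self.allowed and value not in self.allowed:
            return False
        return True

# A two-character alphabetic state code restricted to known values:
state = DataElement("STATE", 2, "alphabetic", {"FL", "OH", "NY"})
print(state.validate("FL"))   # True
print(state.validate("Q7"))   # False: not alphabetic
```

With a specification object like this in hand, the screen's edit checks simply delegate to it, which keeps the validation rules in one place.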

NOW IS NOT THE TIME TO GUESS WHAT THE DATA DEFINITION IS; NOW IS THE TIME TO BE AS PRECISE AS POSSIBLE. Armed with this knowledge, the developer then determines the most suitable mechanisms for collecting the data; for GUI windows, this primarily includes such things as field entries, radio buttons, check boxes, selection lists, and text boxes. The objective here is to force the user to make correct entries as easily as possible. Some considerations:

- Mandate that certain field entries be completed before allowing processing to continue. This can be done by forcing the focus of the window to the field(s) requiring entry, attaching a "hot" color (such as red) to required field entries, and displaying pop-up messages to alert the user to problem entries.

- Automatically enter default values into field entries; this saves time for the user (as well as forcing proper entries). One good example of this is to have the user enter a Zip Code first, which should then automatically populate City and State entries.

- Check characters entered and automatically adjust accordingly. For example, automatically upshift or downshift characters - this is particularly useful when entering State Postal Codes (upshift), and entering e-mail addresses (downshift). Also, reject certain character entries and check formats.

- Make active use of selection lists, thereby forcing the user to select a choice from a prescribed list as opposed to typing an entry.

- Encrypt certain sensitive entries, such as credit card numbers and passwords.

- If your application is to allow Asian characters (e.g., Chinese, Japanese, or Korean), provide the ability to allow for the Double Byte Character Set (DBCS). For info, see:
http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm?info/dm/rbal3mst187.htm

- Accommodate the expression of local units of measure, such as dates, times, money, etc. This "personalizes" the screen for the user.

- Depending on the situation, enable or disable the use of the computer's clipboard for field entries.

- Where applicable, provide for data entry using voice/speech-type dictation.

Finally, format the collected data to suit the targeted physical data base.

By making data entry "foolproof" you will be saving a lot of time and effort for the end-user, the DBA, and yourself.
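A few of these adjustments can be sketched in Python; the zip-code table below is hypothetical (a real application would query a postal database).

```python
# Sketch of "foolproof" entry adjustments; the lookup table is hypothetical.
ZIP_TABLE = {"33684": ("Tampa", "FL")}

def normalize_state(entry: str) -> str:
    """Upshift state postal codes automatically."""
    return entry.strip().upper()

def normalize_email(entry: str) -> str:
    """Downshift e-mail addresses automatically."""
    return entry.strip().lower()

def default_city_state(zip_code: str) -> tuple:
    """Populate the City and State entries from the Zip Code, when known."""
    return ZIP_TABLE.get(zip_code, ("", ""))

print(normalize_state(" fl "))               # FL
print(normalize_email("User@Example.COM"))   # user@example.com
print(default_city_state("33684"))           # ('Tampa', 'FL')
```

The point is that the screen corrects or completes the entry for the user instead of rejecting it after the fact.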

C. Support

To minimize user confusion, be sure to include sufficient Help text and messaging facilities in the screen. Too often I have seen screens with little support in this regard. Again, CUA standards should be observed whenever possible.

Help Text - should be provided for:

A. The screen overall - explaining its overall purpose, who should be using it, and how the data will be processed (its behavior). The Playscript language technique for writing procedures is particularly useful in this regard (see "References" below for details).

B. The various sections of the screen (if there are multiple sections).

C. Field entries - showing the name of the field entry, input specifications, along with some sample and suggested entries. If a generated value is displayed, explain how it is computed (from other field entries).

"Help" push buttons on the screen are helpful, but everything should be related to the F1 Help key, particularly field entries. Further, all screens should feature a Help action-bar choice which includes an Index of subjects, and "About" (identifying the name and version of the software in use).

Messages

Messages basically come in three forms: Informational (requiring no action), Warning (that a potential problem might exist), and Error (prohibiting processing). All messages should be clearly written and easy for the user to understand. For warning and error messages, do not simply report a problem to the user, but also advise him on what he should do about it. In other words, point him in the right direction and don't leave him hanging.
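As a sketch, the three forms can be modeled so that warning and error messages always carry advice; the names below (Severity, format_message) are illustrative only.

```python
from enum import Enum

class Severity(Enum):
    INFO = 1      # informational: requires no action
    WARNING = 2   # a potential problem might exist
    ERROR = 3     # prohibits processing

def format_message(severity: Severity, problem: str, advice: str = "") -> str:
    """Report the problem and, for warnings and errors, what to do about it."""
    if severity is Severity.INFO:
        return problem
    return f"{severity.name}: {problem}. {advice}"

msg = format_message(Severity.ERROR, "Zip Code is missing",
                     "Enter a 5-digit Zip Code and press Enter")
print(msg)  # ERROR: Zip Code is missing. Enter a 5-digit Zip Code and press Enter
```

Making the advice parameter part of the message-building routine is one way to keep developers from reporting a problem without pointing the user in the right direction.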

CONCLUSION

Good screen design requires a developer in tune with his intended audience and who can create a simple and effective solution that is easy for the user to execute, yet promotes the collection of "clean" data. The developer must strike a careful balance between what is graphically elegant and what is practical for the user to use.

One element of design that is alluded to in this discussion is the development of universal systems whereby screens can be translated into foreign languages. There are some simple tricks for doing this. Be sure to read:

No. 03 - "Creating Universal Systems" - Dec 20, 2004
http://www.phmainstreet.com/mba/ss041220.pdf

Above all else, the developer should observe all pertinent design standards when creating screens. As mentioned earlier, users will be more likely to accept and implement new programs if their design is similar to programs they are already familiar with. The need for standardization cannot be stressed enough. To this end, some companies even go so far as to devise a library of standard screen templates for developers to use. This does two things: it helps enforce design standards, and it expedites the development of the screen. But in the end, successful screen design is based on how well the developer knows both the user and the data.

REFERENCES

For vendor CUA (Common User Access) Standards, see:

IBM

http://www-306.ibm.com/ibm/easy/eou_ext.nsf/publish/558

Microsoft

http://msdn.microsoft.com/library/default.asp?URL=/library/books/winguide/fore.htm

Apple

http://developer.apple.com/documentation/index.html

Sun

http://docs.sun.com/app/docs/doc/802-6490

For a description of the "Playscript" procedure language, see:
No. 38 - "The Language of Systems" - Aug 22, 2005
http://www.phmainstreet.com/mba/ss050822.pdf

Tim Bryce is the Managing Director of M. Bryce & Associates (MBA) of Palm Harbor, Florida and has 30 years of experience in the field. He is available for training and consulting on an international basis. He can be contacted at: timb001@phmainstreet.com

Copyright © 2006 MBA. All rights reserved.

Article Source: http://EzineArticles.com/?expert=Tim_Bryce

Manage Your iSeries Data with the Power of Excel

There are many benefits to providing employees with easy access to key business data, but it's important to examine whether your procedures are making it easier to control your iSeries data access while minimizing the time spent exporting that data into useable reports.

Many solutions on the market provide effective ways to manage your company's iSeries data, but which one is right for your environment? More often than not, business intelligence (BI) and document management solutions require a lot of technical expertise and/or programming skills. Implementing and maintaining the software can be an overwhelming task. But many companies are finding that they can bring key iSeries data into a program that their users already know, providing them with timely access to information.

When it comes to easy report-generation, the first solution that comes to mind for many users is Excel. When this ubiquitous spreadsheet and reporting application is paired with software that can easily transfer iSeries data into spreadsheets, users can obtain the information they need and manipulate it with the powerful tools that Excel provides for analysis, communication, and results-sharing.

Many facets of your company's reporting procedures can generate cost savings. As an IT administrator, you may be creating numerous reports for employees throughout your organization: one report for your executive team, a different report for accounting, and others for administrative users. Often, these reports are created by copying and pasting data from hard-to-use file formats into Excel spreadsheets. By using a solution that easily and rapidly exports your data into pre-designed Excel templates, you can save aggravation and expense.

Other savings may occur if your company is currently using preprinted forms. By moving from preprinted forms to a simple solution that exports data into Office products, companies can save thousands of dollars each year by eliminating wasted, outdated forms. Major savings occur when your users can save their documents, print them on plain paper, and easily email them using Outlook.

It is IT's responsibility to provide tools that improve employee productivity. When data is held up because users have to depend on IT to create their reports and IT is busy with other significant business tasks, problems can arise—particularly when mission-critical data is needed immediately. Having both easy access to key information and the needed tools to analyze that data are necessary to an organization's growth. When users are able to create their own reports, they have access to the information they need when they need it.

With some reporting, BI, and document-management software programs, valuable time is spent learning and training new staff members to use the products. Software that can export iSeries data into Excel gives users the ability to manipulate the data in a program that they already know. Many BI software products on the market require extensive time and knowledge from the IT staff, which, again, takes IT away from critical projects.

The distribution method that you use for your internal documents can contribute to many headaches. The best solutions provide the ability to distribute your documents with applications that you already use. With programs that transfer data into an Office product, users can easily send the documents via email or fax, using the applications they know best.

A versatile solution provides the ability to transfer iSeries data into various document formats. A solution that offers the flexibility to create forms in Word or present information in a PDF allows users to tackle many tasks.

Once data is in a Microsoft Office document, the ways to customize that data are endless. Users can create formulas, charts, graphs, and tables to make documents visually appealing and to help them more efficiently analyze the data. Programs that allow you to set up an Excel template in advance—with all of the formatting already created—can be especially helpful because they can save the user a great deal of valuable time. iSeries data is imported quickly with one step, and the completed report is immediately available.
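As a dependency-free illustration of a one-step export, the sketch below writes hypothetical rows to a CSV file that Excel opens directly; in practice, libraries such as openpyxl are the usual choice when a pre-formatted .xlsx template must be filled in. The column names and values are invented for the example.

```python
import csv

# Hypothetical rows pulled from the iSeries; a real job would query the database.
rows = [
    ("Contract", "Period", "Amount"),
    ("C-1001", "200806", 1250.00),
    ("C-1002", "200806", 980.50),
]

# One step: write the report in a format Excel (or any spreadsheet) can open.
with open("report.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The same idea scales to scheduled jobs: the query and export run unattended, and the user simply opens the finished report.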

Ideally, your iSeries-to-Excel solution should allow you to set it up and forget about it. Look for a product that lets you easily set up jobs that automate your reporting process.

To eliminate strains on your network, some solutions offer the ability to save IT resources by working in batch overnight. Those solutions allow optimal performance of your iSeries during peak daytime periods.

As with any BI or reporting solution, your iSeries-to-Excel solution must limit access to sensitive data and easily control access to the reports that users can see.

Users with no authorities on the database can receive iSeries data easily in a spreadsheet format. This prevents you from having to provide authority to everyone who needs access to this data. You don't have to manage the access authorities because the data is sent to the user from the system itself.

The widespread knowledge of the capabilities of Excel makes it easy to explain the benefits of moving iSeries data into spreadsheets. When evaluating BI and reporting solutions, find one that can perform all of the tasks your company requires in one complete solution while making it easy on your users.

For a longer version of this article go to http://www.bosanova.net/iseriesdatatoexcel.html

Martin Pladgeman is President of BOSaNOVA, Inc., a leading developer of enterprise-class thin clients and iSeries connectivity solutions. In addition to their thin client and connectivity suite, BOSaNOVA's Launcher/400 business intelligence solution provides ways for customers to improve productivity and reduce costs. Detailed information on BOSaNOVA Thin Clients and iSeries Connectivity Solutions can be found online at http://www.bosanova.net

Article Source: http://EzineArticles.com/?expert=Martin_Pladgeman

Not too long ago, a friend of mine was having a problem with a program that he had just finished. We were using the source debugger to find out what was causing his problem. We figured out the problem eventually, but in the course of our work, we came across some debugger commands and options that I was previously unaware of. Now I am not going to say that I know everything about the IBM source debugger but I have been using it for a good long while. I was surprised to discover something new, so I thought I'd share what I learned with you.

%INDEX Built-In Function

The %INDEX function is one of the functions that, until recently, I was unaware of. It is really handy when you are manipulating multiple-occurrence data structures. It is even more useful when used in conjunction with the _QRNU_DSI_xxxx built-in value (where xxxx is the name of a multiple-occurrence data structure). The command EVAL _QRNU_DSI_xxxx will return the current occurrence of a multiple-occurrence data structure. This is often a very useful thing to know during program execution, and as far as I know this is the only way to get it. By using the %INDEX function you can also change the current occurrence of a multiple-occurrence data structure. See below.

d WorkMultDS1     ds                  occurs(30)
d  StringA                      10a
d  StringB                      25a

To find the current occurrence of the data structure without changing it:

Command: EVAL _QRNU_DSI_WorkMultDS1

Result: 1 (or whatever the current occurrence of WorkMultDS1 is)

To change the current occurrence:

Command: EVAL WorkMultDS1 = %INDEX(12)

Result: WorkMultDS1 = %INDEX(12) = 12

After issuing the above command, interrogated subfields will reflect the values of the twelfth occurrence of the data structure.

%SUBSTR Built-In Function

This function is really convenient when you are working with large strings. Using the EVAL command by itself will display only the first 500 characters of a field. As a side note, the easy solution to that problem is to append the type and length to the EVAL command as shown below:

This command will display the first 2000 characters of the variable Long_String_Name in character format:

Command: EVAL Long_String_Name:C 2000

This command will display the first 2000 characters of the variable Long_String_Name in hexadecimal format:

Command: EVAL Long_String_Name:X 2000

The hexadecimal display is important when displaying data that contains packed or binary decimal data. But we are not talking about the EVAL command. The %SUBSTR function will do exactly what the name implies: it will display a substring, or a portion, of a string value. See the examples below.

Assume that StringFldA = 'Now is the time for all good men...'

Command: EVAL %SUBSTR(StringFldA 12 4)

Result: %SUBSTR(StringFldA 12 4) = 'time'

Not surprisingly, you can also use the %SUBSTR function to set the value of a specific portion of a string. Sometimes this is a far more helpful way to use this function. An example of this usage is shown below.

Assume that StringFldA = 'Now is the time for all good men...'

Command: EVAL %SUBSTR(StringFldA 12 4) = 'blah'

Result: %SUBSTR(StringFldA 12 4) = 'blah'

To see the complete string use the following command:

Command: EVAL StringFldA

Result: StringFldA = 'Now is the blah for all good men...'

The %SUBSTR function can also be used to set a conditional breakpoint or watch condition. As an example, the code shown below will stop the execution only when positions 12 thru 15 of StringFldA are 'blah':

Command: BREAK 100 when %SUBSTR(StringFldA 12 4) = 'blah'

Or you could watch for those same positions to change by using this watch condition:

Command: WATCH %SUBSTR(StringFldA 12 4)

After issuing the command above, anytime the contents of positions 12 thru 15 change, program execution stops and you are notified.

EQUATE function

More and more often I run into field names that are really long. Now don't get me wrong here, I think this is great. But I don't type so well, so a 25-character field name has a much greater chance of being mis-keyed than an old 6-character one (even if the 25-character one is way more understandable). So if you don't type well (or just don't like to type, like me), then use the EQUATE debug function. This function allows you to define an "alias" for a field name, expression, or command. Take the example below.

Command: EQUATE SmName This_is_a_really_long_field_name

You can then find the value of "This_is_a_really_long_field_name" by keying the command shown below:

Command: EVAL SmName

The EQUATE command can also be used to create a macro...sort of. This can be done by assigning an "alias" to a complete debug command.

Command: EQUATE SmCmd EVAL %substr(StringA,5,5)

Now by simply keying SmCmd and pressing Enter, you can display the value of positions 5 thru 9 of the StringA variable.

So the next time you start testing a program or chasing a bug, remember these, and you might save yourself some headaches.

Jeff Olen is the Chief Operating Officer of Olen Business Consulting, Inc. Olen Business Consulting is a leading provider of iSeries training and temporary staffing. Jeff has over 20 years of experience developing on IBM systems, has held positions at several Fortune 500 companies, and is an accomplished author.

Find out more about Olen Business Consulting and Jeff at http://www.olen-inc.com, or email Jeff directly at jmo@olen-inc.com.

Article Source: http://EzineArticles.com/?expert=Jeff_Olen

"Successful screen design is based on how well the developer knows both the user and the data."
- Bryce's Law

INTRODUCTION

Some time ago I was working with a hospital in the Midwest that was trying to automate some patient admission forms. Hospital forms are notoriously complicated and voluminous (thanks to the lawyers), and this hospital was no different. This made it difficult for the hospital to gather the necessary data about a patient, their physician, and their insurance carrier. As such, they wanted to automate the forms, thereby simplifying the collection of data. Unfortunately, the resulting screen designs were essentially no different from the forms. They were very busy and complicated, with few editing checks. Frankly, they were no better than the forms they were trying to replace and, because of this, use of the screens was spotty at best.

Designing a computer screen is essentially no different than designing a paper form. But since most of today's developers have little experience in forms design perhaps it is time to review some of the basic elements of good design. First, because a screen or form represents how a human being will interface with a system, we must consider the man/machine interface; its ergonomics. This means we must first understand the intended user, including his/her intelligence level and senses. Someone with a greater proficiency in using a computer will have less difficulty in using complicated screens than someone less conversant in computer technology. As to senses, there is little point in devising an elaborate color scheme if the user may be colorblind. Again, know thy intended user.

For more information on ergonomics, see No. 65 - "What Ever Happened to Ergonomics?" - March 6, 2006 http://www.phmainstreet.com/mba/ss060306.pdf

The objective, therefore, in good screen design (and forms design) is to make something that is easy to use (intuitive; requiring little interpretation and confusion) and effective for collecting data and displaying information. Although the following discussion can be applied to screens as used in some character based operating systems, it is primarily concerned with Graphical User Interfaces (GUI) as used in today's popular operating systems.

The GUI was originally introduced with Xerox's Star computer in the early 1980s. Following this, several companies emulated the Star, including Apple, Microsoft, IBM, and Sun. The GUI was extremely popular as it offered an ease of use never before thought possible. The only problem was that it lacked standards, whereby one GUI-implemented program did not behave in the same manner as another GUI program. Fortunately, standards started to appear in the late 1980s with IBM's CUA (Common User Access) standards, which provided a detailed list of design standards for developing a GUI-based program. (NOTE: CUA was an important part of IBM's System Application Architecture standards - SAA). The benefit of CUA standardization was that users familiar with one GUI program could quickly be trained in how to use another GUI program, since they essentially behaved the same. Today there are different interpretations of the CUA standards as implemented by different computer vendors (Gee, what a surprise! ;-) Nonetheless, designing a GUI screen in accordance with accepted standards is preferred over developing a screen without such standards.

DESIGN CONSIDERATIONS

Today there are some pretty slick tools to quickly build screens. Regardless of their capabilities, a developer should be cognizant of three basic design considerations: Layout, Data Entry, and Support.

A. Layout

The objective here is to make the screen "clean" and consistent. Too much detail makes the screen cluttered and abrasive to the end-user. When designing your screen, consider eye movement, eye strain and, where appropriate, add magnification. Here are some tips for consideration:

Alignment - there should be some simple symmetry to the screen. Disjointed alignment of fields, text, and images tends to alienate users. There should be a comfortable amount of spacing, not only around the edge of the screen but between sections of the screen. Because GUI windows can be resized (either maximized or sized to a height and width chosen by the user), consider how the screen will look in either form. Borders are useful for defining sections on the screen, but be careful they do not become overbearing and distracting.

Zoning - this refers to the establishment of sections within the screen. This is useful if different types of users are going to be accessing the same screen, or if different sections serve distinctly separate purposes (thereby not confusing one with another). Borders and colors can be useful for distinguishing sections. In a GUI window, notebook tabs can be useful.

Flow - there should be an obvious flow to the screen that will naturally catch the user's eye and prompt him/her in the proper direction. Understand this: Western countries generally observe things from left-to-right and top-down; Eastern countries observe things top-down and from left-to-right; and Middle Eastern countries observe things from right-to-left and top-down. Also understand that the tab order of the keyboard provides direction for the user. As such, the tab order on a screen should go in a logical order and not jump around meaninglessly.

Type Fonts - use common fonts familiar to users. Fancy fonts may be impressive, but will they be supported on all of the computers from which the screen will be accessed? Commonly accepted fonts include Arial, Courier, Sans Serif, and Times Roman. Devise a standard font point size; 10 points is generally agreed to be readable by the average person, but then again, will your end-user be an average person? Also, devise a standard scheme for upper-case and lower-case lettering and type styles (e.g., bold, italic); such subtleties will naturally attract the eye.

Colors - colors can be helpful for highlighting sections, accenting required field entries, or improving general appearance. Although colors can be helpful, they can also be distracting if they become overbearing. Be sensitive to color contrasts so the user can adequately read the screen. Also be cognizant of end-users who might be colorblind.

Headings - screen headings should be placed in a standard position for easy identification by the user. A formal name and, where appropriate, a screen number should be clearly visible to the user.

Keyboard/mouse relationship - in the event a computer mouse breaks down or is simply not available, the user should still be able to operate the screen using keyboard commands alone. CUA standards are particularly useful in this regard.

B. Data Entry

The proper entry of data is just as important as the physical layout of the screen. Regrettably, many designers take a superficial approach to data collection and, consequently, a lot of time is spent later on cleaning up data in the data base. Considerable time can be saved with a little effort here in screen design. Your objective, therefore, is to produce a screen that will collect "clean" data (as opposed to "dirty" data that will have to be corrected later on).

Before embarking on screen design, the developer should be intimate with the data specifications. This can be obtained either from a good data dictionary/repository, or from the physical data base design. Basically, the developer is looking for the data element's:

- Length - the maximum number of characters which may be assigned to a data element.

- Class - the type of characters to be expressed, e.g., alphabetic, numeric, alphanumeric, signed numeric, etc.

- Justification - the alignment of data within a field when the number of characters is less than the length of the receiving field, e.g., left, right, around the decimal point.

- Fill Character - the character to be used to complete a field when the data item to be placed in the field is shorter than the maximum length, e.g., blank, zero, X, etc.

- Void Character - the character to be used when a data item's value is unknown or nonexistent, e.g., blank, zero, X, etc.

- Unit of Measure - the representation of numeric data, e.g., area, volume, weight, length, time, energy rate, money, etc.

- Precision - for numeric data, the number of significant digits in a number.

- Scale - for numeric data, the placement of the decimal point.

- Validation Rules - the specific values which the data element may assume, including default values. For example, Yes/No, specific codes or numbers to be used, editing rules, etc. This includes such things as the expression of dates:

20051211

December 11, 2005

12/11/2005

2005/12/11

11-DEC-05

- Generated data - quite often it is necessary to show computations based on primary values being entered by the user. As such, it is necessary to know the data dependencies and the formulas for calculating the generated values.

- Program Label - although this will not be visible to the user inputting the data, the developer must understand how the data element is referenced in the data base.
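As an illustration, the attributes above can be collected into a simple specification object that a screen builder could validate entries against. The following Python sketch is my own; the class and field names are hypothetical and not part of any particular screen-design toolkit.

```python
from dataclasses import dataclass, field

@dataclass
class DataElementSpec:
    """Illustrative data-element specification; names are hypothetical."""
    label: str                   # program label used in the data base
    length: int                  # maximum number of characters
    char_class: str              # 'alphabetic', 'numeric', 'alphanumeric', ...
    justification: str = "left"  # 'left' or 'right' alignment within the field
    fill_char: str = " "         # character used to pad short entries
    allowed_values: list = field(default_factory=list)  # validation rule, if enumerable

    def validate(self, value: str) -> bool:
        """Check an entry against the length, class, and allowed-value rules."""
        if len(value) > self.length:
            return False
        if self.char_class == "numeric" and not value.isdigit():
            return False
        if self.char_class == "alphabetic" and not value.isalpha():
            return False
        if self.allowed_values and value not in self.allowed_values:
            return False
        return True

    def format(self, value: str) -> str:
        """Justify and fill a short entry out to the full field length."""
        if self.justification == "right":
            return value.rjust(self.length, self.fill_char)
        return value.ljust(self.length, self.fill_char)

# Example: a two-character, alphabetic state-code field
state = DataElementSpec(label="ST_CODE", length=2, char_class="alphabetic")
```

The point of the sketch is that once the specification is precise, validation and formatting fall out mechanically; guessing at the definition, as the next paragraph warns, leaves both to chance.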

NOW IS NOT THE TIME TO GUESS WHAT THE DATA DEFINITION IS; NOW IS THE TIME TO BE AS PRECISE AS POSSIBLE. Armed with this knowledge, the developer then determines the most suitable mechanisms for collecting the data; for GUI windows, this primarily includes such things as field entries, radio buttons, check boxes, selection lists, and text boxes. The objective here is to force the user to make correct entries as easily as possible. Some considerations:

- Mandate that certain field entries be completed before allowing processing to continue. This can be done by forcing the focus of the window to the field(s) requiring entry; attaching a "hot" color (such as red) to required field entries; and displaying pop-up messages to prompt the user about problem entries.

- Automatically enter default values into field entries; this saves time for the user (as well as forcing proper entries). One good example of this is to have the user enter a Zip Code first, which should then automatically populate City and State entries.

- Check characters entered and automatically adjust accordingly. For example, automatically upshift or downshift characters - this is particularly useful when entering State Postal Codes (upshift), and entering e-mail addresses (downshift). Also, reject certain character entries and check formats.

- Make active use of selection lists, thereby forcing the user to select a choice from a prescribed list as opposed to typing an entry.

- Encrypt certain sensitive entries, such as credit card numbers and passwords.

- If your application is to allow Asian characters (e.g., Chinese, Japanese, or Korean), provide the ability to allow for the Double Byte Character Set (DBCS). For info, see:
http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm?info/dm/rbal3mst187.htm

- Accommodate the expression of local units of measure, such as dates, times, money, etc. This "personalizes" the screen for the user.

- Depending on the situation, provide or negate the use of the computer's clipboard for field entries.

- Where applicable, provide for data entry using voice/speech-type dictation.

Finally, format the collected data to suit the targeted physical data base.
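Several of the considerations above (upshifting state codes, downshifting e-mail addresses, defaulting City and State from the Zip Code) can be sketched as simple normalization routines. This is an illustrative Python sketch only; the function names are my own, and the zip-code table is a stand-in for the postal database or lookup service a real application would use.

```python
# Hypothetical zip-code directory; a real application would query a
# postal database or lookup service instead of a hard-coded table.
ZIP_DIRECTORY = {
    "34683": ("Palm Harbor", "FL"),
    "10001": ("New York", "NY"),
}

def normalize_state(code: str) -> str:
    """Upshift a state postal code entry."""
    return code.strip().upper()

def normalize_email(addr: str) -> str:
    """Downshift an e-mail address entry."""
    return addr.strip().lower()

def default_city_state(zip_code: str):
    """Populate City/State defaults from the Zip Code, if known."""
    return ZIP_DIRECTORY.get(zip_code.strip(), ("", ""))
```

Doing this kind of adjustment in the screen itself, rather than after the fact, is exactly what keeps "dirty" data out of the data base.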

By making data entry "foolproof" you will be saving a lot of time and effort for the end-user, the DBA, and yourself.

C. Support

To minimize user confusion, be sure to build sufficient Help text and messaging facilities into the screen. Too often I have seen screens with little support in this regard. Again, CUA standards should be observed whenever possible.

Help Text - should be provided for:

A. The screen overall - explaining its overall purpose, who should be using it, and how the data will be processed (its behavior). The Playscript language technique for writing procedures is particularly useful in this regard (see "References" below for details).

B. The various sections of the screen (if there are multiple sections).

C. Field entries - showing the name of the field entry, input specifications, along with some sample and suggested entries. If a generated value is displayed, explain how it is computed (from other field entries).

"Help" push buttons on the screen are helpful, but everything should be tied to the F1 Help key, particularly field entries. Further, all screens should feature a Help action-bar choice which includes an Index of subjects and an "About" entry (identifying the name and version of the software in use).

Messages

Messages basically come in three forms: Informational (requiring no action), Warning (that a potential problem might exist), and Error (prohibiting processing). All messages should be clearly written and easy for the user to understand. For warning and error messages, do not simply report a problem to the user, but also advise him on what he should do about it. In other words, point him in the right direction and don't leave him hanging.
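The three message forms map naturally onto a severity scale, with the rule that warnings and errors must always carry advice for the user. The sketch below is illustrative only; the names are my own, not taken from any messaging API.

```python
from enum import Enum

class Severity(Enum):
    INFO = 1      # informational: requires no action
    WARNING = 2   # a potential problem might exist
    ERROR = 3     # prohibits processing

def compose_message(severity: Severity, problem: str, remedy: str = "") -> str:
    """Build a user message. Warnings and errors must include a remedy,
    so the user is pointed in the right direction and never left hanging."""
    if severity is Severity.INFO:
        return problem
    if not remedy:
        raise ValueError("warning and error messages must advise the user")
    return f"{severity.name}: {problem} {remedy}"
```

Enforcing the remedy at the point where the message is composed keeps "what should I do about it?" from being an afterthought.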

CONCLUSION

Good screen design requires a developer who is in tune with the intended audience and who can create a simple, effective solution that is easy for the user to operate, yet promotes the collection of "clean" data. The developer must strike a careful balance between what is graphically elegant and what is practical to use.

One element of design that is alluded to in this discussion is the development of universal systems whereby screens can be translated into foreign languages. There are some simple tricks for doing this. Be sure to read:

No. 03 - "Creating Universal Systems" - Dec 20, 2004
http://www.phmainstreet.com/mba/ss041220.pdf

Above all else, the developer should observe all pertinent design standards when creating screens. As mentioned earlier, users will be more likely to accept and use new programs if their design is similar to programs they are already familiar with. The need for standardization cannot be stressed enough. To this end, some companies even go so far as to devise a library of standard screen templates for developers to use. This does two things: it helps enforce design standards, and it expedites the development of the screen. But in the end, successful screen design is based on how well the developer knows both the user and the data.

REFERENCES

For vendor CUA (Common User Access) Standards, see:

IBM

http://www-306.ibm.com/ibm/easy/eou_ext.nsf/publish/558

Microsoft

http://msdn.microsoft.com/library/default.asp?URL=/library/books/winguide/fore.htm

Apple

http://developer.apple.com/documentation/index.html

Sun

http://docs.sun.com/app/docs/doc/802-6490

For a description of the "Playscript" procedure language, see:
No. 38 - "The Language of Systems" - Aug 22, 2005
http://www.phmainstreet.com/mba/ss050822.pdf

Tim Bryce is the Managing Director of M. Bryce & Associates (MBA) of Palm Harbor, Florida and has 30 years of experience in the field. He is available for training and consulting on an international basis. He can be contacted at: timb001@phmainstreet.com

Copyright © 2006 MBA. All rights reserved.

Article Source: http://EzineArticles.com/?expert=Tim_Bryce