Tuesday, November 10, 2009

The secret to successful virtualization

Server virtualization is a great way for your IT organization to battle today’s economic challenges. You can reduce costs through server consolidation—which in turn increases ROI as you run multiple workloads on a single server. Plus, the ability to deploy new applications—and scale them up or down—boosts business agility.

But be mindful of pitfalls that can undermine these advantages. As companies large and small have learned the hard way, you can negate the cost-saving benefits of server virtualization by choosing an inefficient SAN storage solution that does not properly support the advanced requirements of virtualized environments.

What to watch out for

While you ride out this recession, you need to make the most of your virtualization investments. It pays to make the right storage choices. Here are some things to keep in mind:
  • External shared storage can be very inefficient. For example, a volume allocated at 100 GB may contain only 50 GB of data—and the remaining capacity may go unused for years. The problem is even more evident with temporary volumes supporting test and development environments: they still consume 100 GB, although they will never be filled and are only needed for days or, at most, weeks during testing. Such inefficiency means more upfront storage purchases and rapidly escalating costs.
  • More virtual machines equate to more logical volumes—and more snapshots for backups. Traditional SANs require a 100 percent reserve, so the same half-full 100 GB volume now requires an additional 100 GB for each snapshot. Again, more costs and more wasted space.
  • More often than not, implementing high availability across sites requires an additional storage system—plus synchronous replication software to enable continuous data availability in the event of a single SAN failure. This results in more complexity and increased storage expenses.
  • Disaster recovery using remote replication boosts storage costs yet again. In fact, storage replication solutions often require multiple copies of data and massive capacity reservation that can increase storage inefficiency by a factor of three. First, a snapshot must be created on the local system, requiring a 100 percent reserve. Then the remote copy requires a 100 percent reserve on the remote system. Finally, a copy must be created before a server at the remote location can mount and use it. It all adds up.
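A quick back-of-the-envelope calculation shows where the "factor of three" comes from. The sketch below is illustrative Python only, using the 100 GB volume from the examples above:

```python
volume_gb = 100

# Traditional remote-replication capacity reservations for one volume:
local_snapshot_reserve = volume_gb   # snapshot on the local system (100% reserve)
remote_copy_reserve = volume_gb      # remote copy (100% reserve on the remote system)
remote_mount_copy = volume_gb        # copy the remote server mounts and uses

total_reserved = local_snapshot_reserve + remote_copy_reserve + remote_mount_copy
print(total_reserved / volume_gb)    # 3.0 -- triple the original volume's capacity
```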

Dealing with budget realities

The deployment model for traditional SANs is to purchase a storage system with the capability for future expansion. But with IT budgets being curtailed today, how can you budget for future requirements and expansion? If you are a small or midsized business, you face more cost deterrents. Even in good times, your budget dictates the purchase of a lower-cost SAN solution—which in turn often lacks the features required to support your high availability and disaster recovery requirements, not to mention desired enterprise-class management features. Sure, you can add on software, but that’s an expensive option as well.

Why choose iSCSI SANs

An attractive alternative to traditional SANs, iSCSI SAN solutions offer storage for virtualized environments that addresses the potential pain points around cost and management. Go one step further—to iSCSI SANs with a scale-out architecture, like the HP LeftHand P4000 SAN—and you gain additional advantages in terms of scalability, high availability, disaster recovery and performance. Storage clustering creates a scalable storage pool by aggregating the critical components of a number of storage systems into a single pool of resources. This pool accepts and responds to iSCSI requests as a single system, as all physical capacity is aggregated and available to the volumes created on the SAN. Need more storage capacity and performance? Simply and seamlessly add storage nodes to the pool—without downtime.
By providing built-in thin provisioning, HP LeftHand P4000 SAN solutions dramatically improve overall storage efficiency. Thin provisioning allocates space only as data is actually written to a volume. You can purchase only the storage you need now, and then add to your storage pool later as your storage requirements increase. This increases storage utilization and ROI while helping you defer capital expenditures—always a smart move in these budget-tight times.
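The allocate-on-write behavior that thin provisioning describes can be modeled in a few lines of Python (a toy model for illustration only, not the P4000's actual implementation):

```python
class ThinVolume:
    """Toy model: capacity is promised up front but allocated only on write."""
    def __init__(self, provisioned_gb):
        self.provisioned_gb = provisioned_gb   # size the host sees
        self.allocated_gb = 0                  # physical space actually consumed

    def write(self, gb):
        if self.allocated_gb + gb > self.provisioned_gb:
            raise ValueError("volume full")
        self.allocated_gb += gb                # space drawn from the pool only now

vol = ThinVolume(provisioned_gb=100)
vol.write(50)
print(vol.provisioned_gb, vol.allocated_gb)   # 100 50
```

The host sees a 100 GB volume, but only the 50 GB actually written consumes capacity from the shared pool; the rest can be purchased later.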
What’s more, HP LeftHand P4000 SAN solutions include built-in support for your high availability and disaster recovery implementations, thus eliminating the need to buy add-on software. You also get superior, scalable performance plus straightforward management that any server administrator can understand and put into practice right away.

Delivering on the promise of virtualization

Without a doubt, virtualization opens up a new range of possibilities for your data center: Improved server utilization with high availability and disaster recovery. Performance management through dynamic workload balancing. And simplified management with all servers harnessed as a single, uniform pool of resources. Don’t lose what you gain with server virtualization to inefficient storage. Make sure you choose SAN solutions that fully support your virtualized environment. That way, you can have all the benefits of server virtualization and:
  • SAN storage that drives down costs by increasing storage utilization
  • Flexibility to purchase storage as you need it, making it easier to work within today’s budget constraints
  • Increased capacity and performance as you add to your storage pool, avoiding traditional SAN bottlenecks
  • Peace of mind that comes from knowing you can keep operating in the event of a local or site-wide failure
  • Ease of management so that server and storage administrators alike can manage your iSCSI SAN 
REFERENCES

This article was published by HP (http://www.hp.com).

A Refreshing Move

Encrypt, compress, mask, and deliver information with Oracle Data Pump.

John the DBA at Acme Bank had some special visitors: a few development team leads and the head of IT security. The development teams regularly refresh certain tables from other databases, such as data warehouse tables refreshed from online transaction processing systems and various development database tables refreshed from their production counterparts. The refresh process involves exporting the data by using Oracle Data Pump to generate a dumpfile on the source server; FTPing the dumpfile to the target server; and, finally, importing the dumpfile to the target database. Sometimes a single dumpfile generated at the production database is used to refresh other production and development databases.

Because of certain new security requirements, such as those mandated by the Payment Card Industry (PCI), the security team objected to moving dumpfiles containing sensitive and personally identifiable information (PII), such as Social Security numbers and phone numbers, without encryption. The members of the development team had explored the possibility of using OS-level encryption utilities, but that would have resulted in a two-step process: first creating the dumpfile and then encrypting it. The two-step process also required additional storage, which they did not have, and that was another reason they were asking John the DBA for help.

With the growth of data in the production databases, the generated dumpfiles had also been growing and were starting to put some strain on storage and bandwidth. The development team had considered using OS-level utilities to compress the dumpfiles, but that option also involved a two-step process: first creating the dumpfile and then compressing it. This two-step process also required additional intermediate storage, which development did not have, and the idea of new, multistep processes and intermediate storage for both encrypted and compressed dumpfiles was not making the members of the development team very happy.

In addition, the security team mandated that the PII data in the development database be replaced with some random information or a generic character such as X to make it unidentifiable—a process known as masking. The development team proposed to issue a massive update on the development database after the data was refreshed from an encrypted dumpfile, but the DBAs objected to the idea, because the massive update would have caused severe performance issues on the development database. The idea wasn’t palatable to the head of security either—she wanted the PII data masked in the dumpfile itself before shipping the dumpfile to the development server, so that the development DBAs would not be able to see the PII data.

The development team leads were scratching their heads over the different requirements, and they asked John for help. He assured them that the Oracle Data Pump tool in Oracle Database 11g had all the features to address their different needs—encryption of dumpfiles, compression, and masking of PII data—in a single-step process.

Encryption

Oracle Database 10g Release 2 introduced Oracle Transparent Data Encryption, which enabled columns to be encrypted without any additional programming. The encryption is controlled through a wallet. (For information on setting up wallets for Oracle Transparent Data Encryption, see “Transparent Data Encryption” in the September/October 2005 issue of Oracle Magazine at otn.oracle.com/oramag/oracle/05-sep/o55security.html.)

John pointed out that the same encryption mechanism is available in Oracle Database 11g for encrypting the Oracle Data Pump dumpfiles. He demonstrated the dumpfile encryption, first making sure that the wallet was open and then executing the following command to export the CUSTOMERS table (found in the SH sample schema supplied with Oracle Database) and create the customers_norm.dmp file without encryption:
# expdp sh/sh dumpfile=customers_norm.dmp directory=tmp_dir tables=customers

Then he used the following command to export the CUSTOMERS table and create the customers_enc.dmp file with encryption:
# expdp sh/sh dumpfile=customers_enc.dmp directory=tmp_dir tables=customers encryption=all

To demonstrate that the customers_enc.dmp dumpfile was encrypted, John first searched for a specific customer name—Beatrice—in the unencrypted dumpfile:
 
# grep Beatrice customers_norm.dmp

The command returned
 
Binary file customers_norm.dmp matches 

which meant that the name Beatrice was found in the file. Then John searched for the same name in the encrypted dumpfile:
 
# grep Beatrice customers_enc.dmp

The command returned no output, confirming that the name Beatrice did not exist in the file. The name Beatrice did exist in the database table, however, so it was included in customers_norm.dmp (unencrypted). The value was encrypted in customers_enc.dmp when the ENCRYPTION parameter was used during export, so searching for the cleartext value yielded no result.

The ENCRYPTION parameter in the expdp command enabled the encryption, John explained. The value he supplied was ALL, but someone asked what other values were available. The ENCRYPTION parameter, he explained, accepts the following values:
  • NONE: No encryption will be performed.
  • ENCRYPTED_COLUMNS_ONLY: If some columns are encrypted by Oracle Transparent Data Encryption in the database, the values of those columns only are encrypted in the dumpfile.
  • DATA_ONLY: All data (not just data under Oracle Transparent Data Encryption) is encrypted, but metadata is not.
  • METADATA_ONLY: The metadata is encrypted, but the actual data is not.
  • ALL: Both the data and the metadata are encrypted.
Import

The import process automatically determines that the dumpfile is encrypted and uses the wallet to get the encryption keys and decrypt the dumpfile during the import. If the import is done into a different database, the wallets in both the source and target database must be identical.

John used the following command to demonstrate the import of the file:
 
# impdp sh/sh dumpfile=customers_enc directory=tmp_dir

Someone on John’s team remembered that the encryption feature was also available for Oracle Data Pump in Oracle Database 10g. She asked John what the difference was between Oracle Data Pump in Oracle Database 10g and in Oracle Database 11g. John explained that Oracle Data Pump in the earlier version allowed encryption of columns under Oracle Transparent Data Encryption only. Therefore, if Oracle Transparent Data Encryption wasn’t used at all, none of the dumpfile contents would be encrypted. In Oracle Database 11g, he explained, Oracle Data Pump can use dumpfile encryption even when none of the columns are under Oracle Transparent Data Encryption. Moreover, Oracle Data Pump in Oracle Database 11g can encrypt the whole dumpfile, not just a few Oracle Transparent Data Encryption-encrypted columns.

Inside the database, database security measures such as Oracle Transparent Data Encryption may encrypt critical data, but once the data leaves the database in a dumpfile, it loses that database security protection. John’s demonstration proved that complete data encryption was possible, making the security team very happy. The encryption happening in the same step as the creation of the dumpfile and not requiring that Oracle Transparent Data Encryption be enabled on individual columns in the database made the development team very happy.

Compression

Next John addressed the need to compress the generated dumpfile. Setting the COMPRESSION parameter in the expdp command to ALL compresses both the data and the metadata.

John executed the following command to export the CUSTOMERS table with compression:
 
# expdp sh/sh dumpfile=customers_comp directory=tmp_dir tables=customers compression=all

The command produced the customers_comp.dmp file, and John compared the size of this compressed dumpfile with the uncompressed customers_norm.dmp file produced earlier. To expand the comparison, he compressed the customers_norm.dmp file with the regular gzip command:
 
# gzip -9 customers_norm.dmp

The command created a compressed file named customers_norm.dmp.gz. John then compared the sizes of the normal and different compressed files:

Type of File              Size
customers_norm.dmp        31.58MB
customers_comp.dmp        2.98MB
customers_norm.dmp.gz     3.67MB

From John’s demonstration, everyone understood that Oracle Data Pump compression reduced the original file to almost the same size as the gzip compression (or even smaller, as shown in John’s test). What’s more, the compression occurred when the dumpfile was generated, eliminating a two-step process and consequently saving valuable storage. While importing, John used the same impdp command he’d used earlier, without any special parameters. The Oracle Data Pump utility automatically recognized the dumpfile as compressed, uncompressed it inline, and imported it.

John pointed out that, by default, Oracle Data Pump compresses metadata while exporting. If that compression is not needed, the COMPRESSION=NONE setting disables metadata compression during export.

Masking

Finally, John addressed the issue of PII data, such as phone numbers, being written to dumpfiles destined for the development database. The security department wanted those values randomized rather than sent as is. To demonstrate the Data Pump masking solution, John first created a package, datapump_masking_pkg, that produces different patterns from the input data, as shown in Listing 1. This simple package includes several functions, each of which accepts one parameter, transforms it as directed by the corresponding code, and returns the transformed value.

Code Listing 1: Package for masking data
 
create or replace package datapump_masking_pkg
as
   function random_phone_same_area (p_in varchar2) return varchar2;
   function random_area_same_phone (p_in varchar2) return varchar2;
   function random_all (p_in varchar2) return varchar2;
   function mask_all (p_in varchar2) return varchar2;
end;
/
create or replace package body datapump_masking_pkg
as
   function random_phone_same_area (p_in varchar2) return varchar2 is
   begin
      return 
         substr(p_in,1,4)||round(dbms_random.value (100,999))||'-'||
         lpad(round(dbms_random.value (1,9999)),4,'0');
   end;
   function random_area_same_phone (p_in varchar2) return varchar2 is
   begin
      return round(dbms_random.value (100,999))||substr(p_in,4,12);
   end;
   function random_all (p_in varchar2) return varchar2 is
   begin
      return round(dbms_random.value (100,999))||'-'||
         round(dbms_random.value (100,999))||'-'||
         lpad(round(dbms_random.value (1,9999)),4,'0');
   end;
   function mask_all (p_in varchar2)
   return varchar2 is
   begin
      return 'XXX-XXX-XXXX';
   end;
end;
/

In the datapump_masking_pkg package, John included several functions that use the DBMS_RANDOM package to produce different patterns. The first function—random_phone_same_area—takes a phone number and keeps the area code intact but randomizes the 7-digit phone number. For instance, 123-456-7890 becomes 123 followed by a random string of 7 digits, for example, 895-5429. The random_area_same_phone function does just the opposite—it keeps the 7-digit number but randomizes the area code. The random_all function randomizes all 10 digits, so it produces a number that resembles a real phone number but has nothing in common with the original value. Finally, the mask_all function does not use DBMS_RANDOM and is very different—it replaces all digits with X, so, for example, 123-456-7890 becomes XXX-XXX-XXXX.
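To sanity-check the transformations outside the database, here is a rough Python analogue of the package's functions (an illustration only; the logic Oracle Data Pump actually calls is the PL/SQL package in Listing 1):

```python
import random

def random_phone_same_area(p_in: str) -> str:
    # Keep the area code (first 4 chars, e.g. "123-"); randomize the 7-digit number
    return (p_in[:4] + str(random.randint(100, 999)) + "-"
            + str(random.randint(1, 9999)).zfill(4))

def random_area_same_phone(p_in: str) -> str:
    # Randomize the area code; keep the rest (PL/SQL substr(p_in, 4, 12))
    return str(random.randint(100, 999)) + p_in[3:15]

def random_all(p_in: str) -> str:
    # Randomize all ten digits: the result resembles a phone number
    # but shares nothing with the original value
    return "{}-{}-{}".format(random.randint(100, 999),
                             random.randint(100, 999),
                             str(random.randint(1, 9999)).zfill(4))

def mask_all(p_in: str) -> str:
    # Replace every digit with X
    return "XXX-XXX-XXXX"

print(mask_all("123-456-7890"))               # XXX-XXX-XXXX
print(random_area_same_phone("123-456-7890")) # e.g. 857-456-7890
```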

In this case, the security team wanted to randomize the area code and keep the rest of the phone number. John addressed this by using the REMAP_DATA parameter in the expdp command to call the datapump_masking_pkg package. The REMAP_DATA parameter value has the format [schema.]table.column:[schema.]package.function. John called the random_area_same_phone function in the package:
 
# expdp sh/sh dumpfile=customers_remap directory=tmp_dir tables=customers
remap_data=customers.cust_main_phone_number:datapump_masking_pkg.random_area_same_phone

The dumpfile generated by this command does not have the actual data from the cust_main_phone_number column but, instead, the values supplied by the random_area_same_phone function. Only that column will be modified (or masked) in the dumpfile. All other columns will be exported as is.

After importing the customers_remap.dmp dumpfile to the target (in this case, development) database, John examined the value in one record:
 
SQL> select cust_main_phone_number
  2    from customers
  3    where cust_id = 1
  4    /
 
CUST_MAIN_PHONE_NUMBER
----------------------------------------------------------------
256-379-8954

In the source (production) database, John checked the value for the same record:

CUST_MAIN_PHONE_NUMBER
----------------------------------------------------------------
127-379-8954

Note that the area code changed from 127 in the source database to a random number, 256, in the target database. The rest of the phone number remained the same, in accordance with the security requirement. Had the requirement been something else, such as to randomize the phone number, John could have used one of the other functions in the package.

What if we need to mask additional columns, someone asked. John explained that the solution was to use another REMAP_DATA parameter in the expdp command. For example, to mask an additional column, named customer_phone_2, with a different function—random_phone_same_area—John executed the following:
 
# expdp sh/sh dumpfile=customers_remap2 directory=tmp_dir tables=customers
remap_data=customers.cust_main_phone_number:datapump_masking_pkg.random_area_same_phone
remap_data=customers.customer_phone_2:datapump_masking_pkg.random_phone_same_area

What if we already have a dumpfile with the original (unmasked) values we want to import into a development database but don’t want to perform a massive “masking” update after the import, someone mused. The head of security first reminded everyone that they couldn’t have unmasked PII information available in dumpfiles headed for the development database. John acknowledged the security requirement and offered a hypothetical answer: even if a dumpfile is initially exported without the REMAP_DATA parameter, that parameter can be applied during import, as shown in this command:
 
# impdp sh/sh dumpfile=customers_norm directory=tmp_dir tables=customers
remap_data=customers.cust_main_phone_number:datapump_masking_pkg.mask_all

Here the import transformed (masked) the data before inserting it, eliminating the need for a postimport update. All the columns were imported as is, except the cust_main_phone_number column, which was transformed while being imported.

Someone then asked if all of these Oracle Data Pump tasks—encryption, compression, and masking—can be used together in one dumpfile export. Of course, assured John. To demonstrate, he used the following command:
 
# expdp sh/sh dumpfile=customers_norm directory=tmp_dir tables=customers
remap_data=customers.cust_main_phone_number:datapump_masking_pkg.mask_all
encryption=all
compression=all

This command produced an encrypted and compressed dumpfile and masked the cust_main_phone_number column (in the dumpfile), all in one step.

Conclusion

John restated the original requirements—encryption, compression, and masking—and explained how the new features of Oracle Data Pump in Oracle Database 11g could be used to address them, as shown in Table 1. The advantage of Oracle Data Pump encryption and compression features over operating-system-level encryption and compression tools is twofold. First, it performs the actions inline—that is, it decompresses and/or decrypts (as appropriate) while importing from the compressed or encrypted dumpfile. Second, while importing parts of the dumpfile, such as a single table from a full database export, the decompression or decryption occurs on that portion alone, not on the entire dumpfile.


Requirement: Dumpfiles must be encrypted and decrypted inline, securely, and without human intervention.
Solution: Set encryption=all during export; no special settings are required during import.

Requirement: Dumpfiles must be compressed and decompressed inline, with no separate compression or decompression step.
Solution: Set compression=all during export; import automatically decompresses dumpfiles inline.

Requirement: PII data must be masked in the dumpfile before it reaches the development database.
Solution: Set remap_data=table.column:package.function during export (or import) to transform the values with a masking function.

Table 1: Analysis of dumpfile requirements and solutions




Over the next few days, Acme was able to encrypt dumpfiles automatically, without using Oracle Transparent Data Encryption in the database on the PII columns; substantially compress the dumpfiles; and eliminate the presence of PII data in the dumpfiles themselves. The security team was happy that mandated requirements had been met, and the development team was happy, because the process for moving data was faster, required fewer steps, and consumed less storage and I/O.
 




Encryption by Password
Oracle Data Pump also includes an ENCRYPTION_PASSWORD parameter, which enables users to specify a password while exporting. During import, the same password must be used to decrypt the dumpfile. To enable this feature, you have to set another parameter—ENCRYPTION_MODE—to either PASSWORD or DUAL. Here is an example of a data pump export using the ENCRYPTION_PASSWORD and ENCRYPTION_MODE parameters:
 
# expdp sh/sh dumpfile=customers_enc.dmp directory=tmp_dir tables=customers encryption=all encryption_mode=password encryption_password=TopSecret

The same parameter/value combination—encryption_password=TopSecret—needs to be used during import of this customers_enc.dmp file. Although this option does not require that Oracle Transparent Data Encryption be set up in the database, and consequently eliminates the need to copy the wallet, it does require that the password be provided on the command line, in a script, or manually—all options that are considered less secure. With the wallet, the need to enter passwords is eliminated, providing a truly secure environment.


REFERENCES


This article was published in the July/August 2009 issue of Oracle Magazine, by Arup Nanda.

Bind variables - The key to application performance


Overview 

If you've been developing applications on Oracle for a while, you've no doubt come across the concept of "bind variables". Bind variables are one of those Oracle concepts that experts frequently cite as being key to application performance, but it's often not easy to pin down exactly what they are and how you need to alter your programming style to use them.
To understand bind variables, consider an application that generates thousands of
SELECT statements against a table; for example:

SELECT fname, lname, pcode FROM cust WHERE id = 674;
SELECT fname, lname, pcode FROM cust WHERE id = 234;
SELECT fname, lname, pcode FROM cust WHERE id = 332; 


Each time the query is submitted, Oracle first checks the shared pool to see whether the statement has been submitted before. If it has, the execution plan that the statement previously used is retrieved and the SQL is executed. If the statement cannot be found in the shared pool, Oracle has to go through the process of parsing the statement, working out the various execution paths, and coming up with an optimal access plan before it can be executed. This process is known as a "hard parse", and for OLTP applications it can actually take longer than executing the DML statement itself.

When looking for a matching statement in the shared pool, only statements that exactly match the text of an earlier statement are considered. So if every SQL statement you submit is unique (in that the predicate changes each time, from id = 674 to id = 234 and so on), you'll never get a match, and every statement you submit will need to be hard parsed. Hard parsing is very CPU intensive and involves obtaining latches on key shared memory areas. While this might not affect a single program running against a small set of data, it can bring a multi-user system to its knees if hundreds of copies of the program are trying to hard parse statements at the same time. Worse, contention caused by hard parsing is largely immune to measures such as adding memory or processors, because hard parsing is one thing Oracle can't do concurrently with many other operations. It's a problem that often only comes to light when scaling up a development system from a single user working on a subset of records to many hundreds of users working on the full data set.
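The effect of exact-text matching can be sketched with a toy statement cache (a deliberate simplification for illustration; Oracle's shared pool is far more sophisticated):

```python
# Toy model of the shared pool: execution plans are cached by the
# exact SQL text submitted.
plan_cache = {}
hard_parses = 0

def execute(sql, params=()):
    global hard_parses
    plan = plan_cache.get(sql)
    if plan is None:
        hard_parses += 1                      # cache miss: "hard parse"
        plan = plan_cache[sql] = "plan for: " + sql
    return plan                               # cache hit: "soft parse"

# Literal SQL: every text is unique, so every execution hard parses
for cust_id in (674, 234, 332):
    execute("SELECT fname, lname, pcode FROM cust WHERE id = %d" % cust_id)
print(hard_parses)   # 3 -- one hard parse per statement

# Bound SQL: one shared text, so only the first execution hard parses
for cust_id in (674, 234, 332):
    execute("SELECT fname, lname, pcode FROM cust WHERE id = :cust_no",
            (cust_id,))
print(hard_parses)   # 4 -- a single extra hard parse covers all three
```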

The way to get Oracle to reuse the execution plans for these statements is to use bind variables. Bind variables are "substitution" variables that are used in place of literals (such as 674, 234, and 332) and that have the effect of sending exactly the same SQL text to Oracle every time the query is executed. For example, in our application, we would just submit

SELECT fname, lname, pcode FROM cust WHERE id = :cust_no; 

and this time we would be able to reuse the execution plan every time, reducing the latch activity in the SGA, and therefore the total CPU activity, which has the effect of allowing our application to scale up to many users on a large dataset.
Bind Variables in SQL*Plus 

In SQL*Plus you can use bind variables as follows:

SQL> variable deptno number
SQL> exec :deptno := 10
SQL> select * from emp where deptno = :deptno; 


What we've done to the SELECT statement is take the literal value out of it and replace it with a placeholder (our bind variable), with SQL*Plus passing the value of the bind variable to Oracle when the statement is processed. This part is fairly straightforward: you declare a bind variable in SQL*Plus, then reference it in the SELECT statement.
Bind Variables in PL/SQL 

Taking PL/SQL first of all, the good news is that PL/SQL itself takes care of most of the issues to do with bind variables, to the point where most code that you write already uses bind variables without you knowing. Take, for example, the following bit of PL/SQL:
create or replace procedure dsal(p_empno in number)
as
  begin
    update emp
    set sal=sal*2
    where empno = p_empno;
    commit;
  end;
/


Now you might be thinking that you've got to replace the p_empno with a bind variable. However, the good news is that every reference to a PL/SQL variable is in fact a bind variable.
Dynamic SQL 

In fact, the only time you need to consciously decide to use bind variables when working with PL/SQL is when using Dynamic SQL.

Dynamic SQL allows you to execute a string containing SQL by using the EXECUTE IMMEDIATE command. The following example would always require a hard parse each time it is submitted:
create or replace procedure dsal(p_empno in number)
as
  begin
    execute immediate
     'update emp set sal = sal*2 where empno = '||p_empno;
  commit;
  end;
/

The way to use bind variables instead is to change the EXECUTE IMMEDIATE command as follows:
create or replace procedure dsal(p_empno in number)
as
  begin
    execute immediate
     'update emp set
     sal = sal*2 where empno = :x' using p_empno;
  commit;
  end;
/


And that's all there is to it. One thing to bear in mind, though, is that you can't substitute actual object names (tables, views, columns, etc.) with bind variables - you can only substitute literals. If the object name is generated at runtime, you'll still need to concatenate those parts into the string, and the SQL will only match a statement already in the shared pool when the same object name comes up. However, whenever you use dynamic SQL to build up the predicate part of a statement, use bind variables and you'll dramatically reduce the amount of latch contention going on.
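The same rule holds outside PL/SQL. The sketch below uses Python's built-in sqlite3 module as a stand-in database: the table name must still be concatenated into the string, while the literal in the predicate is replaced by a placeholder:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, sal REAL)")
conn.execute("INSERT INTO emp VALUES (7369, 800), (7499, 1600)")

table_name = "emp"          # object names cannot be bound...
sql = "UPDATE " + table_name + " SET sal = sal * 2 WHERE empno = ?"
conn.execute(sql, (7369,))  # ...but the literal in the predicate can

doubled = conn.execute("SELECT sal FROM emp WHERE empno = ?",
                       (7369,)).fetchone()[0]
print(doubled)   # 1600.0
```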
The Performance Killer 

Just to give you an idea of how big a difference this can make to performance, you only need to run a very small test:

Here is the Performance Killer ....

SQL> alter system flush shared_pool;
SQL> set serveroutput on;

declare
      type rc is ref cursor;
      l_rc rc;
      l_dummy all_objects.object_name%type;
      l_start number default dbms_utility.get_time;
  begin
      for i in 1 .. 1000
      loop
          open l_rc for
          'select object_name
             from all_objects
            where object_id = ' || i;
          fetch l_rc into l_dummy;
          close l_rc;
          -- dbms_output.put_line(l_dummy);
      end loop;
      dbms_output.put_line
       (round((dbms_utility.get_time-l_start)/100, 2) ||
        ' Seconds...' );
  end;
/
101.71 Seconds...

... and here is the Performance Winner:
declare
      type rc is ref cursor;
      l_rc rc;
      l_dummy all_objects.object_name%type;
      l_start number default dbms_utility.get_time;
  begin
      for i in 1 .. 1000
      loop
          open l_rc for
          'select object_name
             from all_objects
            where object_id = :x'
          using i;
          fetch l_rc into l_dummy;
          close l_rc;
          -- dbms_output.put_line(l_dummy);
      end loop;
      dbms_output.put_line
       (round((dbms_utility.get_time-l_start)/100, 2) ||
        ' Seconds...' );
end;
/
1.9 Seconds...


That is pretty dramatic. Not only does the second version execute much faster (we spent more time PARSING our queries than actually EXECUTING them!), it also lets more users use your system simultaneously.
Bind Variables in VB, Java and other applications 

The next question, though, is: what about VB, Java, and other applications that fire SQL queries against an Oracle database? How do these use bind variables? Do you have to split your SQL into two statements, one to set the bind variable and one for the statement itself?

In fact, the answer to this is actually quite simple. When you put together an SQL statement using Java, or VB, or whatever, you usually use an API for accessing the database; ADO in the case of VB, JDBC in the case of Java. All of these APIs have built-in support for bind variables, and it's just a case of using this support rather than just concatenating a string yourself and submitting it to the database. 

For example, Java has PreparedStatement, which allows the use of bind variables, and Statement, which uses the string concatenation approach. If you use the method that supports bind variables, the API itself passes the bind variable value to Oracle at runtime, and you just submit your SQL statement as normal; there's no need to separately pass the bind variable value to Oracle, and no additional work on your part. Support for bind variables isn't limited to Oracle - it's common to other RDBMS platforms such as Microsoft SQL Server - so there's no excuse for avoiding them on the assumption that they're an Oracle-only feature.
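Python's DB-API works the same way as JDBC's PreparedStatement. Using the built-in sqlite3 module as a stand-in database, one statement text serves every execution, with the API passing each value to the driver separately:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cust (id INTEGER, fname TEXT)")
conn.executemany("INSERT INTO cust VALUES (?, ?)",
                 [(674, "Alice"), (234, "Bob"), (332, "Carol")])

# One SQL text, many values: the statement text the database sees
# never changes, so its plan can be reused for every execution.
sql = "SELECT fname FROM cust WHERE id = ?"
names = [conn.execute(sql, (cust_id,)).fetchone()[0]
         for cust_id in (674, 234, 332)]
print(names)   # ['Alice', 'Bob', 'Carol']
```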
Conclusion 

Lastly, it's worth bearing in mind that there are some instances where bind variables are probably not appropriate - usually where, instead of being executed many times a second (as in OLTP systems), your query takes several seconds, minutes, or hours to execute, as in decision support and data warehousing. In that case, the time taken to hard parse the query is only a small proportion of the total execution time, and the benefit of avoiding a hard parse is probably outweighed by the information you withhold from the query optimizer: by substituting the actual predicate value with a bind variable, you remove the optimizer's ability to compare that value with the data distribution in the column, which might make it opt for a full table scan or an index when that choice isn't appropriate. Oracle9i helps deal with this through a feature known as bind variable peeking, which allows Oracle to look at the value behind a bind variable to help choose the best execution plan.

Another potential drawback with bind variables and data warehousing queries is that the use of bind variables disallows the potential for star transformations, taking away this powerful option for efficiently joining fact and dimension tables in a star schema.
REFERENCES

http://www.orafaq.net