#1 - loquin, 03-29-2005, 12:57 PM
Zen and the art of VB programming


This thread is an attempt to pass on information and methodologies to programmers. Even though these topics are not VB specific, they are valuable for any programmer to understand and to practice. (I would also like to offer my sincere apologies to Robert Pirsig for the title of this tutorial...)

To find out more about a topic, just select from the list below, or scroll down the thread to find it.
Note: This thread was split off from the Standards & Practices Tutorial. The initial idea behind Standards and Practices was to create a "How To" thread, with some "Why To" information. However, the posts that are now located here are much more along the lines of "Why To" than "How To," so a new thread seems more appropriate for these (more general-purpose) tutorials.
__________________
Lou
"I have my standards. They may be low, but I have them!" ~ Bette Middler
"It's a book about a Spanish guy called Manual. You should read it." ~ Dilbert
"To understand recursion, you must first understand recursion." ~ unknown

Last edited by loquin; 05-08-2007 at 04:10 PM.
#2 - loquin, 03-29-2005, 12:59 PM
Data Design

When I was growing up, one of the biggest thrills of my life came when, after I passed my driving test, Dad tossed me the car keys and said "Go take the car for a spin, son. Just be back by dinner."

Of course, he knew that I wouldn't have time to get too far, since dinner was in a half-hour, but still, getting the keys was a thrill! Now, one of the things you will often hear about, at least when discussing databases, is primary keys and foreign keys. They don't refer to how to win an election, or where the key was made. They do give an indication as to how well your database is planned, though.

"Why would you want to plan a database anyway? That takes a lot of time to do right, and I just don't have the time to waste on that relational stuff!" Many people DO feel that time spent on the careful design of a database, or of a program for that matter, is time wasted. They feel they can accomplish more by diving right in and building things. OK. The next time you take a trip and drive over a bridge, would you rather cross one that had enough time spent on its design? Or maybe you'd prefer the one that was thrown together without blueprints... hmmm. While there are cases, primarily with extremely simple or one-run applications, where you can ignore good project practices and build from the get-go, time spent on the design of a database, or of a program, almost always reduces the total time spent on the project.

A good database design can allow you to enter and update data easily, and will allow you to easily query, summarize, and create reports. In addition, it will be easy for you to modify the design later, and the database will be easy to document and to maintain. Since a database is a model of the process you're automating, designing a database requires you to make many decisions, including which tables to create, which fields the tables should contain, what relationships should be created, and what rules to create. These decisions will need to be made whether you follow good design practices or not; but the relational model will help guide you in the decision-making process.

However, the relational model cannot help you understand the process you are trying to model. Before you begin modeling the business or process, make SURE you understand it! This will take careful study, and probably a lot of meetings with the people who know the process well.

The relational model was created in 1969 by Dr. E.F. Codd. It is based on the mathematical disciplines of set theory and predicate logic. While you don't need to know this math to apply the relational model, it's good to know that sound mathematical principles form its basis.

In the relational model, Tables represent groups of things (entities) in the real world. These things can represent either objects, or events. For instance, tables might be created to represent objects such as Employees, Inventory, or Locations. You might also build a table to represent events like telephone calls, or customer orders, or work assignments.

You should never have duplicate records in a table. If you do, then you canít uniquely identify or distinguish between the duplicate records programmatically. This can create all sorts of problems in trying to maintain the records. One way in which you can guarantee that all the records in a table are unique is to designate (or create) a primary key. The primary key is a field, or group of fields that you will use to uniquely identify a record in your table. For instance, the EmployeeID field might be the primary key for an employee table.

Candidate key is the term used to identify a potential key in a table that could be used as the primary key. Keys can be simple (made up of one field) or composite (made up of multiple fields). In deciding which candidate field(s) should be used for the primary key, you should choose a key that is minimal (keys made up of fewer fields are desirable), stable (keys whose values are less likely to change are better), and simple and familiar (because they are easier to remember).

If there aren't any acceptable candidate keys, either you are leaving out some important field, or you may need to have your system generate one for you. Such an automatic field is known as a surrogate key. In Access, an AutoNumber field; in SQL Server, an Identity field; and in Oracle or PostgreSQL, a Sequence can all make good surrogate keys.
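As a sketch (the exact syntax varies by product, and the tblEmployee table and its fields are made up for illustration), a surrogate key declaration might look like this:

Code:
-- SQL Server: an Identity column as the surrogate primary key
CREATE TABLE tblEmployee (
    EmployeeID INT IDENTITY(1,1) PRIMARY KEY,
    LastName   VARCHAR(40),
    FirstName  VARCHAR(40)
);

-- PostgreSQL: SERIAL creates and attaches a sequence automatically
--   EmployeeID SERIAL PRIMARY KEY
-- Access (Jet SQL): COUNTER is the DDL name for an AutoNumber field
--   EmployeeID COUNTER PRIMARY KEY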

In a relational database, it's important to be able to establish relationships between related tables. For instance, in a sales system, suppose you have a table of customers, called tblCustomers, defined with the fields CustomerID (the Primary Key), FirstName, LastName, Address, City, State, AddressCode, and PhoneNum. In addition, you've created a table with order details, called tblOrders, which includes fields such as OrderID, OrderDate, and OrderValue. As it is, however, you have no way of relating the two tables together. You COULD just copy all the information about the customer from the customer table into new fields in the orders table. And that would work. Just not very well. First, it means that every time Joe Smith places an order, you get to create a new copy of his name and address. Not only does that take up a lot of space, but what do you do when customer Joe calls up and changes his address? Do you have to go back through your records and update all his orders? Instead, a better approach is to store just a reference to the customer record. You would modify the table tblOrders by adding just the CustomerID field. This key field, which references a different table, is called a Foreign Key. Your orders table now has TWO keys: the Primary Key (OrderID) and the Foreign Key (CustomerID).
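In SQL terms, a generic sketch of the two tables and their keys might look like this (the field types are assumptions, and the exact syntax varies slightly between database products):

Code:
CREATE TABLE tblCustomers (
    CustomerID  INT NOT NULL PRIMARY KEY,   -- Primary Key
    FirstName   VARCHAR(40),
    LastName    VARCHAR(40),
    Address     VARCHAR(80),
    City        VARCHAR(40),
    State       VARCHAR(2),
    AddressCode VARCHAR(10),
    PhoneNum    VARCHAR(20)
);

CREATE TABLE tblOrders (
    OrderID    INT NOT NULL PRIMARY KEY,              -- Primary Key
    CustomerID INT NOT NULL
               REFERENCES tblCustomers (CustomerID),  -- Foreign Key
    OrderDate  DATE,
    OrderValue DECIMAL(10,2)
);

With the foreign key declared, the database itself will refuse an order row that references a customer who doesn't exist. That is referential integrity at work (see the CBT link at the end of this post).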

This type of relationship between the two tables is known as a one-to-many relationship. The one-to-many relationship is the most common relationship in a database application. In our example, the customer (one) can have none, one, or many orders.

The second type of relationship which may need to be addressed in your application is the one-to-one relationship. Tables are said to have a one-to-one relationship if a record in the first table can be referenced by at most one record in the second table. While very few true one-to-one relationships exist in the real world, a one-to-one relationship is quite often defined artificially, typically for security reasons, or to overcome limitations in the database used. In many cases, for instance, you may wish to keep sensitive client information, such as name, address, and social security number, separate from other information. For example, in a hospital database, the client medical data, such as birth date, sex, and allergy information, should be maintained separately from the address and billing information. Different hospital users would be assigned different access levels into the database. The nurses and doctors, for instance, could see the patient's allergy data, while users in the billing department would not be cleared to review this information. Even though more advanced relational database systems like SQL Server and Oracle allow security restrictions on individual fields within tables, one-to-one relationships are often utilized as well.

The third type of relationship that you may need is the many-to-many relationship. In this relationship, zero to many records from the first table may be related to zero to many records in the second table. An example of this might be a track meet. In a track meet, there are many different events, and one event will have many participants. Conversely, a given participant may be participating in many events. Although it is not possible, in a relational database system, to directly define a many-to-many relationship, this relationship is easily created by building an intersection table, also known as a linking table. An intersection table holds foreign keys to the two tables on the "many" sides of the relationship. The primary key of the intersection table is a compound key, consisting of both of the foreign keys. In our track meet example, the event table, tblEvent, could have fields named EventID (PK), EventName, and EventDescription. The participant table, tblParticipant, would hold fields ParticipantID, FirstName, LastName, Sex, Age, etc. The intersection table would have, at a minimum, the foreign key fields EventID and ParticipantID. Since the primary key for this table is the combination of EventID and ParticipantID, there can be only one unique combination of event and participant, which accurately models the real-world process of a track meet. Intersection tables are often used to model object-events, such as appointments, which pair people and times. In fact, in addition to the people/time intersection table, a working appointment database would also employ an intersection table joining time to location, and a third intersection table joining people to location.
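Here is a sketch of the track meet intersection table (the name tblEventParticipant is my own invention; any clear name will do):

Code:
CREATE TABLE tblEventParticipant (
    EventID       INT NOT NULL REFERENCES tblEvent (EventID),
    ParticipantID INT NOT NULL REFERENCES tblParticipant (ParticipantID),
    PRIMARY KEY (EventID, ParticipantID)  -- composite key: one row per pairing
);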

Even though applying primary and foreign keys in your appointment book database just doesn't have quite the same thrill as taking your Dad's '68 Mustang out on your first solo, you can get a lot of satisfaction in getting the relationships set up properly, so that you provide a well-designed database for your project.

FYI: There's a very nice CBT (Computer Based Training) module (flash) on Referential Integrity at the PostgreSQL site. The information provided is general enough to be useful on any database which truly supports R.I. Recommended.
Last edited by webbone; 01-09-2009 at 05:38 PM.
#3 - loquin, 03-29-2005, 01:01 PM
Just how normal ARE you?

“Normal is as Normal does, Forrest.”

That’s what Mama always said. Of course she was always talking ‘bout chocolates, too.

One of the things you will often hear, at least when discussing databases, is normalization. Just what in the heck is normalization, anyway?

Normalization refers to the level to which redundant data has been eliminated from a relational database. Generally, the higher the level of normalization, the more redundant data has been eliminated. Changes in normalization are achieved by changing the design of the relational model of your database.

As we’ve discussed in earlier installments, time spent planning is time well spent. In the case of data normalization, time spent in ensuring that your data is normalized can have great benefit when storing, modifying, or retrieving data in your database, as well as reducing time and costs to implement changes in the database design later.

First Normal Form (1NF)

One definition of First Normal Form (1NF) states “All column values must be atomic.” This means that any one field in the table can hold only one value. Suppose you had an order table where the ITEMS field held “2 Hammer, 3 Chisel, 1 Saw”. This table is NOT in 1NF, because the ITEMS field is not atomic – it does not hold just one value. If you ever tried to build a report to summarize the parts and quantities sold, you would have a hard time with this table structure. First normal form also means that you can’t have repeating groups (or arrays) of fields in a single table. For instance, suppose you tried to fix your earlier order table by dropping the ITEMS field and replacing it with ITEM1, DESCR1, ITEM2, DESCR2, and ITEM3, DESCR3 fields. The first problem with this design is obvious: the first customer to order 4 items would break the table. Even though you could keep adding fields up to the maximum probable order size, you would still be artificially limited as to the maximum number of items in an order, you would waste a lot of space, and it would still be difficult to summarize your sales records.

In order to move a table to 1NF, you will need to break up the multi-value fields and remove the repeating groups. This is done by taking the information that was stored horizontally and storing it vertically instead. In the above case, you would have a single ITEM and DESCRIPTION field, and add an OrderItem field. Our first order above, which consisted of three items, would then have a total of three records to store the data about those three items.
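To make that concrete, here is the same order before and after the move to 1NF (the values and item codes are illustrative):

Code:
Not in 1NF (one multi-valued field):

OrderID  ITEMS
-------  --------------------------
1001     2 Hammer, 3 Chisel, 1 Saw

In 1NF (one record per item):

OrderID  OrderItem  Quantity  ITEM    DESCRIPTION
-------  ---------  --------  ------  -----------
1001     1          2         HAM-01  Hammer
1001     2          3         CHS-01  Chisel
1001     3          1         SAW-01  Saw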

Second Normal Form (2NF)

A table is said to be in Second Normal Form (2NF) if “it is in 1NF and every non-key column is fully dependent on the entire primary key.” This means that tables should only be storing information pertaining to the one entity that is described by the primary key.

As an example, suppose we’ve expanded our tblOrders table a bit, so that the fields are now OrderID, OrderItem, CustomerID, OrderDate, Quantity, ProductID, and ProductDescription. Will this table definition meet 2NF rules? Well, the answer falls back to the definition of 2NF: is every other field in the table fully dependent on the primary key (OrderID and OrderItem)? Fully dependent means that you can only determine the value of a field if you know the value of the primary key. In this case, the answer is NO. CustomerID and OrderDate are dependent on OrderID only, and not on the combination of OrderID and OrderItem. It is very easy to spot this problem by looking at the sorted data from the table: OrderID and CustomerID are always the same for a given order, no matter how many items are in the order.

This situation is not a good one, since the user is forced to enter lots of redundant information: the customer number and the order date are stored with every order detail record, resulting in higher data storage requirements (and thus expense) and in additional opportunities for erroneous entry. We can resolve this issue simply, by breaking the table into two tables: tblOrder (OrderID, CustomerID, and OrderDate) and tblOrderDetail (OrderID, OrderItem, Quantity, ProductID, and ProductDescription).

The table tblOrder describes the entire order; its primary key is OrderID. The table tblOrderDetail describes only the order items; its primary key is a composite of OrderID and OrderItem. Note: The act of breaking a table down into its normalized components is called decomposition. No information is lost when a table is normalized in this fashion, as we can always reconstruct the original table by running a query against the two tables created by normalization, or even better, by creating a view if using a database server.
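For instance, a query along these lines (generic SQL) rebuilds the original, un-decomposed view of the data:

Code:
SELECT d.OrderID, d.OrderItem, o.CustomerID, o.OrderDate,
       d.Quantity, d.ProductID, d.ProductDescription
FROM   tblOrder AS o
       INNER JOIN tblOrderDetail AS d
       ON o.OrderID = d.OrderID;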

Third Normal Form (3NF)

A table is said to be in Third Normal Form (3NF) if “it is in 2NF and all non-key columns are mutually independent.” This means that all non-key fields must not have inter-field dependencies. They must be fully dependent upon just the primary key, and not on each other.

If we look at the tblOrderDetail we created in the 2NF discussion, it has an interdependency between ProductID and ProductDescription. In other words, wherever you have a particular value of ProductID, you will see an identical value of ProductDescription. This table design is not an optimum one, for several reasons. First, you are storing information about two “things” (order detail information, as well as product information) in one table. Every time a user wishes to refer to a screwdriver, for instance, he or she has to enter both its code and its description. This is both inefficient and error prone. Second, if you decided that you needed to change the description of a product, you would need to change it everywhere it is used in the entire orders table. The third, and potentially most dangerous, problem arises if you were to delete all detail records that included a particular ProductID and description: you would lose all reference to that part, as it would no longer be stored anywhere in the database.

The Third Normal Form also forbids the creation of tables that contain calculated fields. The reason for this is that they waste space, and they can easily get out of sync with the source data. Normally, it’s better to generate calculated values only when you need them – within queries and reports, for instance.

In order to move from 2NF to 3NF, just break out another table. In the tblOrderDetail case, add a table called tblProduct which has a primary key of ProductID, and which stores the information specific to the product itself (description, units of measure, minimum stocking quantities, etc.). Then, in table tblOrderDetail, just store a foreign key reference to the tblProduct table.
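A sketch of the 3NF result (the field types, and the extra product fields, are assumptions):

Code:
CREATE TABLE tblProduct (
    ProductID     INT NOT NULL PRIMARY KEY,
    Description   VARCHAR(60),
    UnitOfMeasure VARCHAR(10),
    MinStockQty   INT
);

CREATE TABLE tblOrderDetail (
    OrderID   INT NOT NULL REFERENCES tblOrder (OrderID),
    OrderItem INT NOT NULL,
    Quantity  INT,
    ProductID INT NOT NULL
              REFERENCES tblProduct (ProductID),  -- FK only; no description here
    PRIMARY KEY (OrderID, OrderItem)
);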

Codd originally presented only three normal forms, but others have since shown that further normalization levels exist. In complicated databases, these higher levels of normalization may need to be considered, but they will not be addressed in this tutorial.

Note that, while normalization is generally a good thing, there are times when you may need to denormalize your data – that is, to not follow a strict adherence to normal forms. However, when doing so, you should always start from a strictly normalized database, and then denormalize in order to make your database work better in the real world. The usual reason for denormalizing a database is to improve performance. As an example, suppose that you are developing a crossword puzzle dictionary app. In a crossword, you generally need to know the length of the word you are searching for. Even though SQL offers a length function to calculate the length of a word in a query, if you need to perform this calculation hundreds of thousands of times for each and every search, it would probably be best to intentionally denormalize your database by adding a calculated field (WordLength).
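A minimal sketch of that deliberate denormalization (the table and field names are made up; the length function is LENGTH in most servers but LEN in SQL Server and Jet, and Access uses ? and * as its LIKE wildcards instead of _ and %):

Code:
CREATE TABLE tblWord (
    Word       VARCHAR(40) NOT NULL PRIMARY KEY,
    WordLength INT         NOT NULL  -- denormalized: must always equal LENGTH(Word)
);

-- populate (or refresh) the calculated field in one pass:
UPDATE tblWord SET WordLength = LENGTH(Word);

-- an index lets searches seek directly to words of the right length,
-- instead of recalculating the length of every word on every search:
CREATE INDEX idxWordLength ON tblWord (WordLength);

SELECT Word FROM tblWord
WHERE  WordLength = 5 AND Word LIKE '_A__E';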

There are tradeoffs in denormalization, though. When you store a calculated field in your table, you take on the responsibility of ensuring that the data remains up-to-date. At any point where users can add or edit table data that impacts the calculated field, you must ensure that the calculated data is corrected to match. Otherwise, if a user makes a change, not only is your data denormalized, it is also WRONG. And don’t assume that denormalization will always result in a performance improvement. Even though the performance may be improved in one area (typically a select query), it will certainly be reduced whenever you update the data.

As a general rule, normalize your database whenever you can, and denormalize only when you must. If you decide that you must denormalize, follow these rules:
  • Start with a normalized structure.
  • Denormalize deliberately, and not by accident.
  • Have a good reason for denormalizing.
  • Be fully aware of the tradeoffs that are involved.
  • Document the deviation. Thoroughly.
  • Make sure to create necessary application adjustments to ensure data integrity.
If you do become involved in the modeling of a complex, real-world application, database design can become messy. In this case, you may wish to consider using tools, like Asymetrix’s InfoModeler, or Logicworks’ Erwin, or for simpler applications, MS Visio. In order to use them effectively, you should already understand the relational model.

Now, 'bout those chocolates...

Note: See Paul Litwin's Fundamentals of Relational Database Design (I did). Also, Wikipedia has a nice article dealing with normalization.
Last edited by loquin; 04-25-2007 at 02:27 PM.
#4 - loquin, 03-29-2005, 01:02 PM
Alphabet Soup

When I was a kid, I loved making up words with my lunch - you know, arranging the letters in my alphabet soup to send messages to my brother. And, I still like Scrabble.

Still, all the specialized vocabulary that you see in the software field can be daunting, especially since new ones are being made up every day, it seems. There's one that's been around for years, though, that you should really be aware of: SDLC.

So, just what in the heck IS SDLC, anyway? SDLC stands for System (or Software) Development Life Cycle. SDLC is a design methodology that is widely used in the industry. If you want to get ahead, it's well worth your while to really understand it, and to actually apply it. I mean that. Take the time to study it. It can REALLY be your friend, especially when dealing with clients. The problem is, you can't start it in the middle of a project - you need to define the parameters of your project up front. By this, I don't mean that you have to design the project up front, but you DO need to define the stages of your project, and the deliverables at each stage, at the very beginning.

The basic concept of SDLC is this: you define and document, up front, what your application will do, and you follow through completely, through implementation and test. The first step of SDLC is to define, at a high level, your application. The output of this step is typically known as a Requirements Document. Not only should the requirements document detail just what the expectations are for the application, it should also include the Application Domain (the part of the real world that your app will interface to). The Application Domain can include the users of the app and their roles, the departments affected within a company, the network, database, or other physical resources required, etc. Another thing that the Requirements Document can (and often should) contain is unanswered questions. It's highly unlikely that you'll be able to answer all the questions about the application at this time, but you should document what you need to find out to complete it. In order to produce the Requirements Document, you'll need to spend time understanding the BUSINESS requirements for the application. Interview the anticipated key users of the system. Talk to the managers to confirm that what you propose will meet the business objectives of the company. If you can't do this, your "solution" will have the proverbial snowball's chance in Hades of being approved.

Once you've defined the requirements of the app, and gotten client approval, you can then begin the Functional Specification document. This document will have little to no software design, but is the basis for all further system design. In it, you define HOW the application will work: what the users will input (but not how they input it); what the outputs will be, but not specifically how the outputs will look; what data will be stored, but not HOW the data will be stored; where and when data will flow from point to point, but not how the data actually flows.

Then, once approval is gained from the client on the functional spec, you can move into the actual design phases. Produce a database design, application software design, report designs, and input screen designs. These design documents will be based on the functional specification. Finally, after you get buy-off on the software design, you can move forward on the software building, testing, and maintenance phases.

You may say to yourself - "What the H&*($ are you doing, going through all these steps before you ever start coding! I can't afford to spend that sort of time!" Well, given the above realities, I would counter with "You can't afford NOT to spend this time in the design and approval process."

Think about it... By going through with these steps, you can limit your exposure to loss. You won't have spent a lot of time designing and coding that will have to be scrapped, re-designed, and re-coded. Because the client will have had the opportunity to review, comment on, and approve each step, they will have a sense of ownership as well, and you will have bullets in your gun when a design change request comes along. "Yes, Mr. Customer - that CAN be done. However, I've already designed and built the software to meet our original specification, so it will cost us X weeks of development time to re-write the code, and the schedule will take a hit." The client can then make the decision to pay for a re-write now, or to wait until the project is done and implement it as a separate, follow-up project.

By spending the time up front establishing the concepts of SDLC, you can tie your payments into the development as well. For example: you receive 10% of your quoted price on delivery of the Requirements Document, 20% on delivery of the Functional Spec, 20% on design completion, 30% on installation completion, and the final 20% on client acceptance.

Approaching a project this way also has a soothing effect on the client. First, it shows that you are taking a professional approach, and have a definite plan for their project. They are ALSO limiting their exposure to risk. If they wish, they could end the project after the functional spec is released. Then, they could go out for bids with a functional spec to document their requirements, if they so desire. In fact, it's often a good approach on your part to commit an initial quote only to produce the functional spec, because you often won't know the exact parameters of the project until you take the time to produce a functional spec anyway. Then, with a firm understanding of the application requirements, you can bid the project. (And, by actually producing the functional spec, you've got a leg up on the competitors, as you KNOW exactly what needs to be done.) In addition, since you're now familiar with the people and the project, you have another advantage over the competition.

Finally, time you spend on the conceptual design of the application is well spent, as you will end up with far fewer of those expensive re-writes and redesigns later. And, because you've taken the time, in the beginning, to understand the business processes, I contend that the software you produce will be of higher quality than one thrown together without the up-front, formalized conceptual design. If you'd like to read more about analysis and design, the Hollis book mentioned in the books thread in the Tutors' Corner has a lot of information about SDLC (even though Hollis doesn't use the term at all).

So, soup, anyone?


Edit by loquin: Hollis' book allows free copying and distribution of the templates used within the book, so here they are, in the ZIP attachment.
Attached Files
File Type: zip Requirements-funct spec templates.zip (19.8 KB, 39 views)
Last edited by loquin; 04-21-2008 at 01:54 PM.
#5 - loquin, 05-08-2007, 04:01 PM
Accessing the Data

Buzzwords and acronyms. Data access in Visual Basic is simply drowning in them. DAO. ADO. Data Providers. OleDB. ODBC. JET. ADODC... The list goes on and on. Why can't we just make this Simple!!! Shoot, when I was growing up, a buzzword was what you got when you said something while playing a kazoo.

Well, let's give simplification a whirl. The first thing to remember is that Microsoft has been TRYING to make it simple for VB for a long, long time - since even before VB was released. But data access often isn't as simple as it sounds. Sure, the underlying concept can be fairly straightforward. Add a new record. Edit a record. Delete a record. View a record. In and of itself, any of these basic database operations isn't that difficult. However, when dealing with databases, remember that a database usually doesn't exist in isolation.

Often, more than one user (client) can be using the database at one time. What do you do when two users are trying to edit the same table? What about when they're trying to edit the same record? These complex issues, as well as many others, will force applications programmers to utilize database access libraries.

Keep in mind - once upon a time, the situation was WAY more complex than today. Every database had its own means of performing the basic operations! Which meant that if I wrote an application that interfaced to a dBase database, it was a real chore to make that app work with Paradox or Btrieve. In order to standardize the confusion, Microsoft introduced a unified interface to several databases called ODBC. ODBC is an acronym for Open Database Connectivity. It is essentially an API which gives Windows programs a standard interface for accessing a database. In order for the single API to be able to talk to multiple databases, each database type required a custom interface to be written. This interface, or ODBC driver, is called by ODBC to read and write to the database. So, many database vendors each wrote a driver for their particular database which adhered to the ODBC standards defined by Microsoft. And 'All was well in the database world.'

Yeah. Righhhttt. I have a bridge to sell, too...

There was one little issue. Even though writing database applications WAS easier when you had a common interface, ODBC was designed around ISAM (Indexed Sequential Access Method) databases like dBase, FoxPro, and MS Access, with its JET (Joint Engine Technology) database engine.

The ODBC model was designed to provide efficient access to ISAM databases and Access was designed as an ISAM database to take advantage of this. However, as database servers like Oracle and Sybase came onto the scene, the ODBC model just didn't 'fit' as well. In addition, in order to use ODBC, you HAD to do so by using its API, in the form of a lot of low-level C programming. It just was not a very easy process to master.

Microsoft had also released a Rapid Application Development tool called Visual Basic. I'm sure you've heard of it. As more and more developers realized its potential, Microsoft released a library to allow VB to interface to databases. DAO (Data Access Objects) was closely modeled on ODBC, and allowed VB programmers to take advantage of the ODBC interface without writing low-level code.

However, the shortcomings of the ODBC model as it related to database server technology still existed. Yes, it was possible to interface to database servers, but the interface wasn't necessarily very efficient. In order to close this gap, and allow efficient use of database servers, which tend to be much more scalable than ISAM databases, Microsoft released a new (short-lived) object model library called RDO (Remote Data Objects), which more closely aligned itself to server-based database systems. Meanwhile, Microsoft was also developing a new data access API built on its Component Object Model (COM): OleDB (Object Linking and Embedding, Database).

However, Microsoft soon realized that other potential data sources, like legacy database systems, directory structures, log files, and the like, would also benefit from a standard data access interface, so they released ADO, or ActiveX Data Objects, built on top of OleDB, to provide a standardized interface to ISAM databases, server-based databases, non-traditional data sources, and legacy systems alike. Microsoft often referred to ADO as a means for 'Universal Data Access.' ADO, available since VB5, allows the application programmer to use the same interface library to access a wide-ranging set of potential data sources.

OleDB supports these various data sources with OleDB Data Providers, which are equivalent to ODBC's drivers. They provide the data specific interface, while ADO provides the common access functionality.
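To put all of this in VB terms, here's a minimal ADO sketch (set a project reference to the 'Microsoft ActiveX Data Objects 2.x Library' first; the connection string, path, and table name are assumptions for a Jet/Access database):

Code:
Dim cn As ADODB.Connection
Dim rs As ADODB.Recordset

' The Jet OleDB provider. Pointing the SAME code at SQL Server,
' Oracle, etc. is (mostly) a matter of changing this string.
Set cn = New ADODB.Connection
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
        "Data Source=C:\Data\Orders.mdb"

Set rs = New ADODB.Recordset
rs.Open "SELECT OrderID, OrderDate FROM tblOrder", _
        cn, adOpenForwardOnly, adLockReadOnly

Do Until rs.EOF
    Debug.Print rs!OrderID, rs!OrderDate
    rs.MoveNext
Loop

rs.Close
cn.Close
Set rs = Nothing
Set cn = Nothing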

Now that you've had a Microsoft history lesson, please refer to the attached diagram. It is a schematic which outlines many of the ways that you can access a database from within VB. Note that the diagram is intentionally incomplete. It does not, for instance, contain any reference to RDO. Also, please be aware that the data sources in the database layer show a great deal of overlap that cannot be reflected in a simple diagram. Some ISAM data sources have many relational features, while some relational databases store their data internally in an ISAM format. And some of the 'Non-Relational' data sources may exhibit relational traits.
Attached Images
File Type: jpg ADO Object Model.JPG (43.8 KB, 102 views)
Attached Files
File Type: pdf ADO Object Model.pdf (12.1 KB, 95 views)
Last edited by loquin; 06-21-2007 at 09:49 AM. Reason: clarification