
NOTE TO TOOL OWNERS: In this blog I will occasionally make statements about products that you will take exception to. My intent is to always be factual and accurate. If I have made a statement that you consider to be incorrect or inaccurate, please bring it to my attention and, once I have verified my error, I will post the appropriate correction.

And before you get too exercised, please read the post, dated 9 Feb 2006, titled "All Tools Suck".

Thursday, July 27, 2006

XCMTDMW: Import is Everything, Part 2

At the end of part one we had successfully imported our system of two documents, our publication source document, doc_01.xml, and the XSD schema document that governs it, book.xsd. We created the dependency relationship between doc_01.xml version 1 and book.xsd and we captured as much object metadata as we could given what little we knew about the data at hand. This created a repository with the following state:
/repository/resources/RES0001 - name: "doc_01.xml"; initial version: VER0001
/repository/resources/RES0002 - name: "book.xsd"; initial version: VER0002
/repository/versions/VER0001 - name: "doc_01.xml"; Resource: RES0001
    dependency: DEP0001
    namespaces: http://www.example.com/namespaces/book
    root element type: "book"
    mime type: application/xml
    xml version: 1.1
    encoding: UTF-8
/repository/versions/VER0002 - name: "book.xsd"; Resource: RES0002
    root element type: "http://www.w3.org/2001/XMLSchema:schema"
    namespaces: http://www.w3.org/2001/XMLSchema
    target namespace: http://www.example.com/namespaces/book
    mime type: application/xml
    xml version: 1.0
    encoding: UTF-16
/repository/dependencies/DEP0001 - Target: RES0002; policy: "latest"
    dependency type: "governed by"

We saw that it was the importer that needed to have all the XML awareness.

Now we need to see what happens when we do something with our data. There are two interesting use cases at this point:

1. Creation of a new version of doc_01.xml

2. Creation of a new document governed by the same schema

For use case 1, let's say that by some mechanism, and it doesn't matter what, we end up with a new document outside the repository called doc_01.xml that is different in its data content from the doc_01.xml we imported as VER0001 into the repository. E.g., we checked VER0001 out of the repository, edited it, and now want to check it back in. Or we left the original doc_01.xml where it was on our file system, edited that copy, and now want to check it in. Or our editor accessed the bytes in VER0001 directly from the repository, let us edit them, and now wants to create a new version in the repository. It doesn't matter how we come to have the changed version of doc_01.xml; the import implications are more or less the same.

The first steps of the import process are the same:

1. Process the XML document semantically in order to discover any relationships it expresses via links, which determines the members of the bounded object set (BOS) we need to import.

2. Process the compound-document children of the root storage object, i.e., book.xsd. We determine that book.xsd has no import or include relationships to any other XSD documents.

Assuming we haven't changed the schema reference or the "book" element's namespace, we get the same BOS we did before: doc_01.xml and book.xsd. (A sketch of this discovery pass follows.)
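To make steps 1 and 2 concrete, here is a minimal sketch of the discovery pass in Python. It chases only file-system-relative pointers; a real importer would handle the full range of pointer forms:

import os
from xml.etree import ElementTree as ET

XSI = "{http://www.w3.org/2001/XMLSchema-instance}"
XSD = "{http://www.w3.org/2001/XMLSchema}"

def collect_bos(path, bos=None):
    """Collect the bounded object set rooted at an XML document by following
    its schema pointer and, within schemas, xs:import/xs:include pointers."""
    bos = set() if bos is None else bos
    path = os.path.normpath(path)
    if path in bos:
        return bos  # already visited; guards against reference cycles
    bos.add(path)
    root = ET.parse(path).getroot()
    refs = []
    loc = root.get(XSI + "schemaLocation")
    if loc:
        refs.append(loc.split()[-1])  # location half of the ns/location pair
    for el in root.iter():
        if el.tag in (XSD + "import", XSD + "include") and el.get("schemaLocation"):
            refs.append(el.get("schemaLocation"))
    for ref in refs:
        if ref.startswith("/repository/"):
            continue  # points back into the repository; nothing to import
        collect_bos(os.path.join(os.path.dirname(path), ref), bos)
    return bos

Running collect_bos("doc_01.xml") on our example returns the two-member BOS: doc_01.xml and book.xsd.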

3. For each member of the BOS, determine whether or not the repository already has a resource for which the BOS member should be a new version.

Now it gets interesting. First, we have to determine if our new doc_01.xml is really a version of resource RES0001. Remember: there's no general solution to this problem--you have to do something to either remember this information outside the repository or provide some heuristic for figuring it out when you need to or simply ask the user.

When I said above that it didn't matter how we came to have a new doc_01.xml that wasn't quite true because the way that we came to have it will likely determine how we know what version and resource it relates to in the repository.

If you use a check-out operation then you can capture the information about what version and resource you checked out, either as separate local metadata or embedded in the XML document (for example, as a processing instruction or attribute value). Putting the metadata in the document itself is safer because then you can't (easily) lose it, but it limits you to managing XML data only (and it's not really safe because you can't prevent an author from modifying it if they really want to). Putting the metadata outside the document is more general but then requires a bit more work, either on the part of authors (they have to know where things are or should be on the file system) or in terms of some local data-management facility to maintain the information. But this is the approach that CVS and Subversion use. It's simple and it works fine as long as users know what the limits are on their ability to do things like move files around.
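For example, a check-out operation might embed the provenance at the top of the exported document as a processing instruction. The PI name and pseudo-attributes here are invented for illustration:

<?xml version="1.1"?>
<?checked-out-from repository="http://repo.example.com/" resource="RES0001" version="VER0001"?>
<book xmlns="http://www.example.com/namespaces/book"
...

On re-import the importer reads the PI to learn which resource and version the file came from, then strips or updates the PI as part of the commit.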

If you are accessing the bytes of a storage object directly via an editor then the editor can just remember where it got them from. This works as long as the editor doesn't crash or, if it does crash, has cached the metadata away somewhere.

But it can still happen that you just get a file from somewhere and whoever gave it to you tells you "this should be a new version of resource RES0001". For example, somebody might have made some changes offline and mailed you the file. In this case, you, the human, have to figure out what to do.

Note too that in the general case you can't depend on things like filenames. While we usually do as a matter of practice, there's no magic to it. If you look at the repository listing above you'll notice that the resources and versions both have name properties. At least in the SnapCM model, these names are arbitrary and need not be unique in any scope beyond the object itself (and an object can have multiple names--they're just metadata values as far as SnapCM is concerned). The invariant, unique identifiers of the objects are the object IDs (RES0001, VER0002, etc.). For versions, the ultimate identifier is the resource they are a version of.

For example, say you like to reflect the version of a file in the filename itself, a common practice when people are not using an actual versioning system. You find you've got directories full of files like "presentation_v1.ppt" and "presentation_v2.ppt" and "presentation_final_wek.ppt". The filenames may only be coincidentally similar but you happen to know that they are all in fact different versions of the same resource, the presentation you were asked to write. In a repository like ours here you could import all these different versions and create them as versions of the same resource and they could keep their original names as their Name metadata value.

This is all to make the point that two storage objects are different versions of the same resource because we say they are and the general nature of the SnapCM model lets us say it however we want for whatever reason--there's no dependency on any particular storage organization or naming conventions or anything else. This means that you're free to apply the model to any particular way of organizing and naming things you happen to prefer. It also means that you can take any system of versioned information and recreate it exactly (in terms of the version-to-version and version-to-resource relationships) in a SnapCM repository.

Ok, back to our task. In this case we know that our local doc_01.xml is in fact a new version of resource RES0001.

Now we come to the schema, book.xsd. If we never exported it, meaning that we accessed it directly from the repository, then we will see that the pointer to it points back into the repository; that is, doc_01.xml as initially exported looks like this:
<?xml version="1.1"?>
<book xmlns="http://www.example.com/namespaces/book"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="/repository/resource/RES0002"
>
...
</book>
The importer can therefore know with certainty that we never created a new version (because versions inside the repository are invariant and cannot be changed) and therefore excludes it from the BOS to be imported. It's part of the BOS rooted at doc_01.xml, but since it's already in the repository we don't need to import it.

But if we had exported both doc_01.xml and book.xsd, such that doc_01.xml as exported looked like this:
<?xml version="1.1"?>
<book xmlns="http://www.example.com/namespaces/book"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="../schemas/book/book.xsd"
>
...
</book>
Then we've got a potential issue, because we may or may not have modified the schema (possibly inadvertently if, for example, we opened it in an editor to see what its rules were and as a side effect saved it, changing even just some whitespace).

The importer must now determine if it really needs to import a new version of book.xsd or not and, if it does, should it create it as a new resource or as a new version of an existing resource. How can it make this determination?

First, it can look to see if there is already a schema in the repository that governs the namespace "http://www.example.com/namespaces/book". It can make this determination by doing a query like "find all latest versions with root element 'http://www.w3.org/2001/XMLSchema:schema' and with targetNamespace value 'http://www.example.com/namespaces/book'". If this returns any versions then you know that you have at least one resource related to the target namespace that is an XSD schema (and not, for example, a RelaxNG schema).
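In Python, against a hypothetical repository API (all the method and property names here are invented for illustration), that query might look like:

XSD_ROOT = "http://www.w3.org/2001/XMLSchema:schema"

def find_governing_schemas(repo, target_ns):
    """Find all latest versions that are XSD schema documents governing target_ns."""
    hits = []
    for version in repo.latest_versions():
        md = version.metadata  # property name -> value
        if (md.get("root element type") == XSD_ROOT
                and md.get("target namespace") == target_ns):
            hits.append(version)
    return hits

schemas = find_governing_schemas(repo, "http://www.example.com/namespaces/book")
# 0 hits: no governing schema known; 1 hit: the easy case; 2 or more: see below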

If you get back more than one resource then you have a problem: either something screwed up on a prior import and created two resources for what should have been one resource or you have two truly different XSD documents that both target the same namespace. Now you have to figure out which one is the correct one to use before you can even decide whether or not to create a new version of it. How do you decide?

I find this to be a tough question. The challenge here is partly a function of the details of XSD in that you can choose to organize an XSD schema into multiple XML documents, all of which may name the same target namespace. But only one of them is the real root of the compound document, that is, only one of them can actually be used as the starting point for validating documents.

You might also have different variants of the same base schema for different purposes. For example, I have this case where I have one variant for publishing that defines global key constraints and another variant for authoring that does not, because for authoring the documents will be organized into many separate XML documents and XSD provides no way to constrain or validate cross-document references.

One way to handle this would be to use version metadata to indicate explicitly which of your XSD documents are schema roots and which are not. Another way would be to put that inside the schema as an attribute on the schema element or a subelement in your own namespace or whatever. And of course you could do both, with your XSD importer using the embedded metadata to set the storage object metadata.

But you should start to see that this is the first place where we are forced to integrate the repository with our local and non-standard business rules and that the knowledge and implementation of those business rules is in...wait for it...the importer.

It should also be clear at this point that no out-of-the-box XML-aware importer is going to do the thing you need except by accident or if you modified your policies in order to fit what the tool does. If the tool you choose happens to match what you already do or what you're happy to do, then great, you chose well, buy the engineers who built it a beer and go on your way. But if it doesn't....

Another approach would be to limit yourself to having exactly one XSD document per governed namespace. This is the easiest solution and a lot of times you can do it but it's not realistic as a general practice for the reasons given above.

OK, so schemas (and not just XSD schemas, any form of schema) complicate things.

So where were we? Oh yeah, is our book.xsd a new resource, a new version of an existing resource, or already in the repository?

In our current example there is only one version that governs the namespace, so we only need to determine whether we need to import our local copy. Here we have to look to see if it's been modified locally. If the local copy has not been modified, which we can know if we captured the time it was checked out (this is what CVS does) and compare that to the last-modified time stamp on the file, then we know we don't need to import it. If it has been modified, or at least the timestamp has changed, or if we didn't capture that (somebody just sent us a bunch of files and said "load these up"), then our only choice is to do some form of diff against the version in the repository.

We could just do a simple byte compare, which is easy to implement but for XML we might want to be more sophisticated and use an XML-aware diffing engine so we don't commit new versions that differ only in things like whitespace within markup. Again, this is a function of the importer and you, the importer implementor, get to choose how sophisticated you make it. For something simple like XIRUSS-T you can expect at most a simple byte-level diff. For a commercial system that claims XML awareness you should expect some sort of XML differencing that you can configure. Or you might just have to figure it out yourself by asking somebody or looking at the files or guessing.
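As a minimal sketch of both options, using only the Python standard library (a production XML differ would be far more configurable):

import filecmp
from xml.etree.ElementTree import canonicalize  # Python 3.8+

def bytes_identical(path_a, path_b):
    # The cheap check: exact byte-for-byte equality.
    return filecmp.cmp(path_a, path_b, shallow=False)

def xml_equivalent(path_a, path_b):
    # The smarter check: compare canonical (C14N 2.0) forms, so differences
    # in attribute order or whitespace within markup don't force an
    # unnecessary new version.
    return canonicalize(from_file=path_a) == canonicalize(from_file=path_b)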

OK, in our case we do a simple byte compare and determine that the file we have locally and the one in the repository are identical, so no need to create a new version.

3.1 In temporary storage (or in the process of streaming the input bytes into the newly-created version objects) rewrite all pointers to reflect the locations of the target resources or versions as they will be within the repository.

This is just like the last time.
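For our documents the rewrite amounts to fixing up xsi:schemaLocation values. A minimal sketch, using a regular expression for brevity (a real importer would rewrite via a proper parse and handle every pointer-bearing attribute):

import re

def rewrite_schema_locations(xml_text, location_map):
    """Rewrite the location half of each xsi:schemaLocation pair using the
    importer's map of as-authored locations to repository paths, e.g.
    {"../schemas/book/book.xsd": "/repository/resources/RES0002"}."""
    def fix(match):
        parts = match.group(1).split()
        if parts:
            parts[-1] = location_map.get(parts[-1], parts[-1])
        return 'xsi:schemaLocation="%s"' % " ".join(parts)
    return re.sub(r'xsi:schemaLocation="([^"]*)"', fix, xml_text)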

3.2 For each BOS member, identify the relevant metadata items and create each one as a metadata item on the appropriate newly-created repository object.

Ditto

4. Having constructed our empty storage-object-to-version map, we execute the import process. In this case, we will construct the following new objects in the repository:

- Version object VER0003, the next version of VER0001 (and by implication, a version of resource RES0001)

- Dependency object DEP0002 from version VER0003 to resource RES0002, reflecting the governed-by relationship between doc_01.xml and book.xsd.

The new state of the repository is:
/repository/resources/RES0001 - name: "doc_01.xml"; initial version: VER0001
/repository/resources/RES0002 - name: "book.xsd"; initial version: VER0002
/repository/versions/VER0001 - name: "doc_01.xml"; Resource: RES0001
    prev_versions: {none}
    next_versions: VER0003
    dependency: DEP0001
    namespaces: http://www.example.com/namespaces/book
    root element type: "book"
    mime type: application/xml
    xml version: 1.1
    encoding: UTF-8
/repository/versions/VER0002 - name: "book.xsd"; Resource: RES0002
    prev_versions: {none}
    next_versions: {none}
    root element type: "http://www.w3.org/2001/XMLSchema:schema"
    namespaces: http://www.w3.org/2001/XMLSchema
    target namespace: http://www.example.com/namespaces/book
    mime type: application/xml
    xml version: 1.0
    encoding: UTF-16
/repository/versions/VER0003 - name: "doc_01.xml"; Resource: RES0001
    prev_versions: VER0001
    next_versions: {none}
    dependency: DEP0002
    namespaces: http://www.example.com/namespaces/book
    root element type: "book"
    mime type: application/xml
    xml version: 1.1
    encoding: UTF-8
/repository/dependencies/DEP0001 - Target: RES0002; policy: "latest"
    dependency type: "governed by"
/repository/dependencies/DEP0002 - Target: RES0002; policy: "latest"
    dependency type: "governed by"

Notice a few new things in this listing:

- I've added the prev/next version pointers to the versions. In SnapCM, each version can have more than one previous or next version, where different versions are organized into different "branches", which I haven't talked about yet (our current repository is a repository with exactly one branch, if you want to be precise about it).

- There are two dependency objects which appear to be identical by the metadata shown. However, each dependency is owned by the version that uses it (it's really an exclusive property of the version) and its metadata is not invariant. In particular, you are likely to want to change the resolution policy for a given version as the state of the repository changes, as we'll see in a moment. Of course, a real implementation could transparently normalize the dependency objects so it only maintained instances that actually varied in their properties, creating new instances as necessary. But that's an optimization we don't need to worry about here. [You may be starting to see the method in my madness: if I can think of a way it could be optimized I don't worry about reflecting that optimization in the abstract model, because I'm confident that if that optimization is needed it can be added to the implementation.]

- Except for maybe doing a diff on import, we've said nothing about the data content of the versions. That's because, for most purposes the data content is really secondary and arbitrary. There's nothing about the functioning of the repository itself (as opposed to the importer, which is all about the data) that has any direct knowledge of or dependency on the data inside the storage objects. You can think of the repository as a Swiss bank: it doesn't know and it doesn't want to know. Knowing is somebody else's job. By the same token, there are lots of types of versions that are only collections of simple metadata values and are not storage objects at all.

OK, so now we've successfully committed a new version of doc_01.xml into the repository, and we correctly did not create an unnecessary new version of the schema. We did a good day's work; let's go home.

OK, not so fast.

We discovered that our schema is not complete with respect to our requirements and we have to add a couple of new element types or some attributes or whatever. The point is we have to modify it. We also discover that one of our existing content models is wrong wrong wrong and that we have to change it in a way that will make existing documents invalid. Doh!

So we check out version VER0002 to create a local copy of book.xsd. We edit it to change the content model, and go to commit it back to the repository.

But wait--if we do that, what will happen?

By default, all the dependency links from documents to their governing schemas use the "latest" policy. If we commit a new version we will effectively break those documents even though they are, today, valid against the current latest version of book.xsd in the repository. What do we do?

This is a matter of policy. You could choose to invalidate all the documents and require that they all be edited to make them valid. Sometimes that's the right thing to do based on whatever your local requirements are.

Or you could do this:

1. Find all the dependencies that point to schema book.xsd: "find all dependency objects of type 'governed by' that point to resource RES0002"

2. For each dependency, change its resolution policy from "latest" to "Version VER0002".

This changes the dependencies from being dynamic, resolution-time pointers to hardened, version-specific pointers. Notice too that we didn't do anything to the versions involved.

Now, let's refine this operation a little bit by saying that, as a matter of our policy, we want to harden the links to schemas for all versions that are not the latest version of their resource. That is, we don't want to break any old versions, but we do want to break the latest so that we know we have to fix it.

That means that for dependency DEP0001 we will change the policy to "Version VER0002" but for DEP0002 we will not. In addition, we will add a metadata value to the latest versions to indicate that we know they are not (or probably not) valid against their schema. [I know I said that version metadata is invariant, but actually some is and some isn't, depending on the semantics of the metadata. Or you can imagine that we created a new version to reflect the new metadata and updated the repository state accordingly. Since I have to type the repository state by hand, let's just say we can change version metadata.] A sketch of this hardening pass follows.
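Here is that refined hardening pass, again against a hypothetical repository API (find_dependencies, owning_version, and latest_version are invented names):

def harden_old_schema_dependencies(repo, schema_res_id, pinned_version_id):
    """Pin the schema dependencies of non-latest versions to a specific
    schema version; flag the latest versions as needing repair."""
    for dep in repo.find_dependencies(type="governed by", target=schema_res_id):
        owner = dep.owning_version
        if owner is not owner.resource.latest_version():
            dep.policy = "Version %s" % pinned_version_id  # harden old versions
        else:
            owner.metadata["is schema valid"] = False  # will break; fix later

harden_old_schema_dependencies(repo, "RES0002", "VER0002")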

The new state of the repository is:
/repository/resources/RES0001 - name: "doc_01.xml"; initial version: VER0001
/repository/resources/RES0002 - name: "book.xsd"; initial version: VER0002
/repository/versions/VER0001 - name: "doc_01.xml"; Resource: RES0001
    prev_versions: {none}
    next_versions: VER0003
    dependency: DEP0001
    namespaces: http://www.example.com/namespaces/book
    root element type: "book"
    mime type: application/xml
    xml version: 1.1
    encoding: UTF-8
    is schema valid: true
/repository/versions/VER0002 - name: "book.xsd"; Resource: RES0002
    prev_versions: {none}
    next_versions: {none}
    root element type: "http://www.w3.org/2001/XMLSchema:schema"
    namespaces: http://www.w3.org/2001/XMLSchema
    target namespace: http://www.example.com/namespaces/book
    mime type: application/xml
    xml version: 1.0
    encoding: UTF-16
/repository/versions/VER0003 - name: "doc_01.xml"; Resource: RES0001
    prev_versions: VER0001
    next_versions: {none}
    dependency: DEP0002
    namespaces: http://www.example.com/namespaces/book
    root element type: "book"
    mime type: application/xml
    xml version: 1.1
    encoding: UTF-8
    is schema valid: false
/repository/dependencies/DEP0001 - Target: RES0002; policy: "Version VER0002"
    dependency type: "governed by"
/repository/dependencies/DEP0002 - Target: RES0002; policy: "latest"
    dependency type: "governed by"

Let's think about what we've done:

- We've used the indirection of the dependency links to change or preserve the processing result of the XML documents even though we didn't change the documents themselves. For the old version of doc_01.xml we preserved our ability to process it as a valid document by explicitly binding it to the version of book.xsd against which it was last validated. For the new version of doc_01.xml we made the conscious choice to allow it to become invalid when we commit the new version of book.xsd.

- We added a new metadata value, "is schema valid", that allows us to capture information about the documents that reflects some aspect of their processing. In this case we're setting it by hand because we know the commit of the new schema is about to make the latest version invalid, but you could imagine that we have a process that gets every latest XML document that is not a schema, validates it against its schema, and records the result in the "is schema valid" property. This could then drive a Layer 3 workflow application that every morning sends a report listing all the XML documents that are not valid. Or we could do a validation on import and indicate the result there. Whatever. The point is we've added more metadata that is specific to our business processes and policies. A sketch of such a validation sweep follows.
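In this sketch, validate stands in for whatever XSD validator you have at hand, and the repository API names are, as before, invented:

def validation_sweep(repo, validate):
    """Validate every latest-version governed document against its schema,
    resolved via its dependency's current policy, and record the result."""
    invalid = []
    for version in repo.latest_versions():
        dep = version.dependency_of_type("governed by")
        if dep is None:
            continue  # schemas and other ungoverned objects are skipped
        schema = dep.resolve()  # honors "latest" or "Version VERnnnn"
        ok = validate(version.data(), schema.data())
        version.metadata["is schema valid"] = ok
        if not ok:
            invalid.append(version)
    return invalid  # feed the morning report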

Now that we've made the repository safe for a new schema version, we import our updated book.xsd document using the same process as before. The new state of the repository is:
/repository/resources/RES0001 - name: "doc_01.xml"; initial version: VER0001
/repository/resources/RES0002 - name: "book.xsd"; initial version: VER0002
/repository/versions/VER0001 - name: "doc_01.xml"; Resource: RES0001
    prev_versions: {none}
    next_versions: VER0003
    dependency: DEP0001
    namespaces: http://www.example.com/namespaces/book
    root element type: "book"
    mime type: application/xml
    xml version: 1.1
    encoding: UTF-8
    is schema valid: true
/repository/versions/VER0002 - name: "book.xsd"; Resource: RES0002
    prev_versions: {none}
    next_versions: VER0004
    root element type: "http://www.w3.org/2001/XMLSchema:schema"
    namespaces: http://www.w3.org/2001/XMLSchema
    target namespace: http://www.example.com/namespaces/book
    mime type: application/xml
    xml version: 1.0
    encoding: UTF-16
/repository/versions/VER0003 - name: "doc_01.xml"; Resource: RES0001
    prev_versions: VER0001
    next_versions: {none}
    dependency: DEP0002
    namespaces: http://www.example.com/namespaces/book
    root element type: "book"
    mime type: application/xml
    xml version: 1.1
    encoding: UTF-8
    is schema valid: false
/repository/versions/VER0004 - name: "book.xsd"; Resource: RES0002
    prev_versions: VER0002
    next_versions: {none}
    root element type: "http://www.w3.org/2001/XMLSchema:schema"
    namespaces: http://www.w3.org/2001/XMLSchema
    target namespace: http://www.example.com/namespaces/book
    mime type: application/xml
    xml version: 1.0
    encoding: UTF-16
/repository/dependencies/DEP0001 - Target: RES0002; policy: "Version VER0002"
    dependency type: "governed by"
/repository/dependencies/DEP0002 - Target: RES0002; policy: "latest"
    dependency type: "governed by"

Now we're starting to get some interesting stuff in the repository.

We have cross-document links (the links from the doc_01.xml versions to their schemas), we have version-aware link resolution via the dependencies, we have both generic and business-process-specific metadata, and we have some sequences of versions in time.

We can also see that the repository itself stays remarkably simple--what you see here is not that far from what a fully-populated set of properties and objects would look like (as you can see if you run the XIRUSS-T application). You can also see that the repository state could easily be represented using a direct XML representation for export, archiving, or interchange (the storage object data streams could be held in the same XML or as separate storage objects on the file system).
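For instance, an interchange serialization of the state above might look something like this (the element and attribute names are invented for illustration):

<repository>
  <resource id="RES0001" name="doc_01.xml" initial-version="VER0001"/>
  <resource id="RES0002" name="book.xsd" initial-version="VER0002"/>
  <version id="VER0003" resource="RES0001" name="doc_01.xml" prev-versions="VER0001">
    <metadata name="is schema valid">false</metadata>
    <dependency-ref idref="DEP0002"/>
  </version>
  <dependency id="DEP0002" target="RES0002" policy="latest" type="governed by"/>
  <!-- ...remaining versions and dependencies... -->
</repository>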

But we've done some pretty sophisticated stuff, what with intelligent handling of schema versions and managing our links using indirect, version-aware, policy-based pointers. How did we do it? We did it in the importer (and to a lesser degree, in the exporter), where all the complexity lies, because that's where the specific knowledge of the data formats, their semantics, and our local business objects, processes, and policies lives.

Let's talk about exporters for a minute.

I haven't said much about exporters because most of the complexity is in the importers: that's where you have to do all the initial syntactic and semantic processing to get the stuff into the repository. Getting it out is usually much easier.

In the best case there is no export at all: you access all storage objects directly from the repository without first copying them out to your local file system.

But in reality you will always need to do some exporting, if only for long-term, repository-independent archiving of your data (you do do that, right?).

For export, the main concern is rewriting pointers so that they point to the appropriate version of the correct resource in the correct location. As we saw above, this varies from doing nothing (if you are accessing the target object from the repository using the current resolution policy) to setting a pointer to a relative URL that reflects where the target was copied to locally.

In addition, depending on how you manage the local file-to-version metadata on export, the exporter needs to set that metadata. Essentially, the exporter needs to have in its head a mapping from versions in the repository to their eventual locations as exported, so it can then rewrite any pointers that need rewriting. This map is either explicit, because the exporter creates it as it does the exporting, or implicit in some file-organization convention, the most obvious of which is that the export structure matches the directory (or folder or cabinet or whatever) structure in the repository. A sketch of such a two-pass export follows.
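In this sketch the version API (name, data, dependencies, resolve, pointer_in_data) is hypothetical, and everything is assumed to be UTF-8 for brevity:

import os

def export_versions(versions, export_dir):
    # Pass 1: build the version-to-exported-location map.
    locations = {v.id: os.path.join(export_dir, v.name) for v in versions}
    # Pass 2: write each version out, rewriting pointers whose targets were
    # also exported; pointers to unexported targets stay as repository URLs.
    for v in versions:
        text = v.data().decode("utf-8")
        for dep in v.dependencies():
            target = dep.resolve()  # applies the dependency's policy
            if target.id in locations:
                rel = os.path.relpath(locations[target.id],
                                      os.path.dirname(locations[v.id]))
                text = text.replace(dep.pointer_in_data, rel)
        with open(locations[v.id], "w", encoding="utf-8") as out:
            out.write(text)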

Of course there's more that an exporter could do, such as creating Zip or tar packages of the exported files, loading the results into another repository, or whatever.

So exporters also have to be smart, and they will have some knowledge of the data formats to be exported (so they can, at a minimum, rewrite pointers) and of local business rules and policies, but they are still much simpler than the corresponding importers, and much of their work is probably already supported by facilities needed by the importer (such as XML attribute rewriting).

But we've now seen one complete cycle of the create-modify-create-new-version process, and once you can do one cycle you can do a million.

We still need to look a bit more closely at the implications of resolution of links via dependency objects. We also need to look at more linking cases, both for import and for processing. Finally, we need to look at the requirements and implications for rendition processing (that is, processing compound documents to produce a deliverable publication, such as PDF or HTML pages).

Next time: Linking and addressing with versioned hyperdocuments


Comments:

Anonymous Anonymous said...

I had my "ah-ha!!" moment when you indicated creating a version-specific dependency and the "isitvalid" metadata. That makes a lot of sense. I was also glad to realize that the logic behind determining "is a new version of another asset a version or new object?" can be somewhat subjective. One of my fears is that a user without adequate training would get into a CVS and muck it up through ignorance.

I also very much like the idea of letting a completely different file be the next version of a prior file. There are sometimes aggregates of a document within a lifecycle where the file type is different for each version (e.g., the manuscript is MS Word in one version, then XML in another version, then press-ready PDF in another version).

My one unanswered question so far is the 2nd one from my comment on your last post. Let me clarify. Upon import of the XML document with a relative link, the importer renames the relative link to one appropriate to the repository (e.g. /repository/...). My question is on that atomic action. The modified XML becomes different than what was imported by the user, yet no new version is created. It's an internal process, I know, but--do you store the original relative path name inside the XML doc somewhere or does it not matter because the exporter re-writes the new relative path name when the asset is checked out or exported? And, if the system is making a change to the asset on import, why not make it a new version?

9:30 AM  
Blogger Eliot Kimber said...

I understand your question now.

The short answer is: you have to rewrite on import in order to ensure that the repository is in a consistent state with regard to all the data within the repository.

But, to address your concern, you are correct that the best thing to do is to remember the original location of the file as it was imported and to remember the exact form of the original pointer.

The first would be metadata of the version you create on import, and XIRUSS-T does this by default. It's not a 100% solution unless you also capture the machine it was imported from, but it's convenient and will help authors know what they're looking at after import.

The second would be metadata of the dependency relationship you create in order to reflect the dependency represented by the original link. If the link is to a specific element you might also capture some details from the target element as dependency metadata just for convenience and as a way to avoid full-on element-level link management. But it's just a convenience--it doesn't have any bearing on the functionality of the dependency or your ability to rewrite the storage object pointer.

By the same token, you don't really need it because when you export again you have to be able to put the data anywhere and rewrite the pointers so that the data is correct as exported.

So there's really no point in creating a version that is the exact state of the originally-imported document. Whenever you use any repository you are knowingly giving over some degree of control over the data--even CVS will modify your documents on import to update CVS-specific tags you put in the document.

The key is that you should be able to make an informed choice about what form that change takes. For example, if import requires creating new attributes on your elements, that's probably bad. Changing a pointer so that it still works is not modifying the semantics of your data, it's preserving them. Adding an attribute may well be changing the semantics, and it imposes requirements on your schema that you may not want to have imposed or may not even be able to allow.

For example, any repository that puts repository object IDs into documents is seriously suspect in my eyes. It suggests that the engineers haven't thought the issues through all the way and it grates to have anyone screw with my schema.

But I also realize that for certain problems (sometimes caused by the repository architecture, sometimes not) using attributes in this way is the easiest way to solve a problem. But I don't have to like it and I've gone out of my way to develop an architecture that doesn't require it (although it could take advantage of it).

11:27 AM  
