Dr. Macro's XML Rants

NOTE TO TOOL OWNERS: In this blog I will occasionally make statements about products that you will take exception to. My intent is to always be factual and accurate. If I have made a statement that you consider to be incorrect or inaccurate, please bring it to my attention and, once I have verified my error, I will post the appropriate correction.

And before you get too exercised, please read the post, dated 9 Feb 2006, titled "All Tools Suck".

Sunday, January 26, 2014

DITA without a CMS: Tools for Small Teams

[This is a copy of a post I made to the Yahoo DITA Users list.]

A topic of discussion that comes up quite a bit (it came up at the recent Central Texas DITA User Group meeting) is how to "do DITA" without a CMS, by which we usually mean, how to implement an authoring and production workflow for a small team of authors with limited budget without going mad?

NOTE: I'm using the term "CMS" to mean what are often called Component Content Management (CCM) systems.

This is something I've been thinking about and doing for going on 30 years now, first at IBM and then as a consultant. At IBM we had nothing more than mainframes and line-oriented text editors along with batch composition systems yet we were able to author and manage libraries of books with sophisticated hyperlinks within and across books and across libraries. How did we do it? Mostly through some relatively simple conventions for creating IDs and references to them and a bit of discipline on the part of writing teams. We were using pre-SGML structured markup back then but the principles still apply today.

As I say in my book, DITA for Practitioners, some of my most successful client projects have not had a CMS component.

Note that I'm saying this as somebody who has worked for and still works closely with a major CMS vendor (RSI Content Solutions). In addition, as a DITA consultant who wants to work with everybody and anybody, I take some risk saying things like this, since a large part of the DITA tools market revolves around CMS systems (as opposed to editors or composition systems, where the market has essentially settled on a small set of mature providers that are unlikely to change anytime soon).

So let me make it clear that I'm not suggesting that you never need a CMS--in an enterprise context you almost certainly do, and even in smaller teams or companies, lighter-weight systems like EasyDITA, DITAToo, Componize, BlueStream XDocs, and DocZone can offer significant value within the limits of tight small-team budgets.

But for many teams a CMS will always be prohibitive, whether in cost or time or both, especially at the start of projects. So there is a significant part of the DITA user community for whom CMS systems are either an unaffordable luxury or something to be worked toward and justified once an initial DITA project proves itself.

In addition, an important aspect of DITA is that you can get started with DITA and be productive very quickly without having to first put a CMS in place. Even if you know you need to have a CMS, you can start small and work up to it. I have seen many documentation projects fail because too much emphasis was put on implementing the CMS first.

Many people get the idea, for whatever reason, that a CMS is cost of entry for implementing DITA and that is simply not the case for many DITA users.

The net of my current thinking is that this tool set:
  • git for source content management
  • DITA Open Toolkit for output processing
  • Jenkins for centralized and distributed process automation
  • oXygenXML for editing and local production
allows you to implement an almost-complete, low-cost DITA authoring, management, and production system. Of these four tools, only one, oXygenXML, is commercial. If you use Github to host private repositories, that has a cost, but it's minimal.

In particular, the combination of git, Jenkins, and the Open Toolkit enables easy implementation of centralized, automatic build processing of DITA content. Platform-as-a-service (PaaS) providers like CloudBees, OpenShift, and Amazon Web Services provide free and low-cost options for quickly setting up central servers for things like Jenkins, web sites, and so on, with varying degrees of privacy and easy-to-scale options.
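As a concrete sketch of what the automation piece can look like, here is a minimal Ant script of the sort a Jenkins job could run after each commit. The paths, map name, and targets are hypothetical, and it assumes a DITA-OT 1.x-style installation whose top-level build.xml accepts the standard args.input, transtype, and output.dir parameters, and that the job runs in an environment where the OT's Ant and classpath are already set up (e.g., via the OT's startcmd script). Parameter names vary somewhat between OT versions, so treat this as a starting point rather than a recipe.

<?xml version="1.0" encoding="UTF-8"?>
<!-- build-docs.xml: invoked by a Jenkins job as "ant -f build-docs.xml" -->
<project name="build-docs" default="html" basedir=".">
  <!-- Location of the configured DITA Open Toolkit (hypothetical path) -->
  <property name="dita.ot.dir" location="/opt/dita-ot"/>
  <!-- Root map for the publication (hypothetical name) -->
  <property name="map" location="${basedir}/maps/user-guide.ditamap"/>

  <target name="html" description="Generate XHTML from the root map">
    <ant antfile="${dita.ot.dir}/build.xml">
      <property name="args.input" location="${map}"/>
      <property name="transtype" value="xhtml"/>
      <property name="output.dir" location="${basedir}/out/html"/>
    </ant>
  </target>

  <target name="pdf" description="Generate PDF from the root map">
    <ant antfile="${dita.ot.dir}/build.xml">
      <property name="args.input" location="${map}"/>
      <property name="transtype" value="pdf2"/>
      <property name="output.dir" location="${basedir}/out/pdf"/>
    </ant>
  </target>
</project>

The Jenkins job then just checks out the repository, runs Ant against this file, and archives or publishes whatever lands in the out/ directory.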

The key here is low dollar cost and low labor investment to get something up and running quickly.

This doesn't include the effort needed to customize and extend the OT to meet your specific output needs--that's a separate cost dependent entirely on your specific requirements. But the community continues to improve its support for doing OT customization and the tools are continually improving, so that should get easier as time goes on (for example, Leigh White's DITA for Print book from XML Press makes doing PDF customization much easier than it was before--it's personally saved me many hours in my recent PDF customization projects).

For each of these tools there are of course suitable alternatives. I've specified these specific tools because of their ubiquity, specific features, and ease of use. But the same approach and principles could be applied to other combinations of comparable tools.

OK, so on to the question of when must you have a CMS and when can you get by without one?

A key question is: what services do CMS systems provide, how critical are they, and what alternatives are available?

As in all such endeavors it's a question of understanding your requirements and matching those requirements to appropriate solutions. For small tech doc teams the immediate requirements tend to be:
  1. Centralized management of content with version control and appropriate access control
  2. Production of appropriate deliverables
  3. Increased reuse to reduce content redundancy
  4. Localization
Given that understanding of very basic small-team requirements, how do the available tools align to those requirements?

Since the question is CMS vs. ad-hoc systems built from the components described above, the main question is "What do CMS systems do and how do the ad-hoc tools compare?"

CMS systems should or do provide the following services:

1. Centralized storage of content. For many groups just getting all their content into a single storage repository is a big step forward, moving things off of people's personal machines or out of departmental storage silos.

2. Version management. The management of content objects as versions in time.

3. Access control. Providing controls over who can do what with different objects under what conditions.

4. Metadata management for content objects. The ability to add custom metadata to objects in order to enable finding or support specific business processes. This includes things like classification metadata, ownership or rights metadata, and metadata specific to internal business processes or data processing.

5. Search and retrieval of content objects. The ability to search for and reliably find content objects based on their content, metadata, workflow status, etc.

6. Management of media assets. The ability to manage non-XML assets (images, videos, etc.) used by the main content objects. This typically includes support for media object metadata, format conversion, support for variants, streaming, and so on. Usually includes features to manage the large physical data storage required. Sometimes provided by dedicated Digital Asset Management (DAM) systems.

7. Link management. Includes maintaining "where used" information about content and media assets, management of addressing details, and so on.

8. Deliverable production. Managing the generation of deliverables from content stored in the CMS, e.g. running the Open Toolkit or equivalent processes.

These are all valuable features and as the volume of your content increases, as the scope of collaboration increases, and as the complexity of your re-use and linking increases, you will certainly need systems that provide these features. Implementing these services completely and well is a hard task and commercial systems are well worth the cost once you justify the need. You do not want to try to build your own system once you get to that point.

In any discussion like this you have to balance the cost of doing it yourself with the cost of buying a system. While it's easy to get started with free or low-cost tools, you can find yourself getting to a place where the time and labor cost of implementing and maintaining a do-it-yourself system is greater than the cost of licensing and integrating a commercial system. Scope creep is a looming danger in any effort of this kind. Applying agile methods and attitudes is highly recommended.

The nice thing about DITA is that, if you don't do anything too tool-specific, you should be able to transition from a DIY system to a commercial one with minimum abuse to your content. That's part of the point of XML in general and DITA in particular.

Also, keep in mind that DITA has been explicitly architected from day 1 to not require any sort of CMS system--everything you need to do with DITA can be done with files on the file system and normal tools. There is no "magic" in the DITA design (although it may feel like it to tool implementors sometimes).

So how far can you get without a dedicated CMS system?

I suggest you can get quite a long ways.

Services 1, 2, and 3: Basic Data Management

The first three services: centralized storage, version management, and access control, are provided by all modern source code management (SCM) tools, e.g. Subversion, git, etc. With the advent of free and low-cost services like Github, there is essentially zero barrier to using modern SCM systems for managing all your content. Git and Github in particular make it about as easy as it could be. You can use Github for free for public repositories and at a pretty low cost for private repositories or you can easily implement internal centralized git repositories within an enterprise if you have a server machine available. There are lots of good user interfaces for git, including Github's own client as well as other open-source tools like SourceTree.

Git in particular has several advantages:
  • It is optimized to minimize redundant storage and network bandwidth. That makes it suitable for managing binaries as well as XML content. Essentially you can just put everything in git and not worry about it.
  • It uses a distributed repository model, in which each user can have a full copy of the central repository to which they can commit changes locally before pushing them to the central repository. This means you can work offline and still do incremental commits of content. Because git is efficient with storage and bandwidth, it's practical to have everything you need locally, minimizing dependency on live connections to a central server.
  • Its branching model makes working with complex revision workflows about as easy as it can be (which is not very but it's an inherently challenging situation).

Service 4: Metadata Management

Here DITA provides its own solution in that DITA comes out of the box with a robust and fully extensible metadata model, namely the <data> element. You can put any metadata you need in your maps and topics, either by using <data> with the @name attribute or by creating simple specializations that add new metadata element types tailored to your needs. For media assets you can either create key definitions with metadata that point to media objects or use something like the DITA for Publishers <art> and <art-ph> elements to bind <data> elements to references to media objects (unfortunately, the <image> element does not allow <data> as a direct child through DITA 1.3).
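To make that concrete, here is a minimal sketch of custom metadata carried in a topic prolog via <data> (DITA 1.2-style markup; the metadata names and values are made up for illustration):

<topic id="pump-overview">
  <title>Pump overview</title>
  <prolog>
    <metadata>
      <!-- Arbitrary name/value metadata, no specialization required -->
      <data name="product-line" value="hydraulics"/>
      <data name="review-status" value="approved"/>
      <!-- <data> elements can nest to form structured metadata -->
      <data name="owner">
        <data name="department" value="Tech Pubs"/>
        <data name="writer" value="jdoe"/>
      </data>
    </metadata>
  </prolog>
  <body>
    <p>...</p>
  </body>
</topic>

A simple specialization of <data> can then turn names like "review-status" into real element types if and when you need the extra rigor.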

In addition, you can use subject schemes and classification maps to impose metadata onto anything if you need to.
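For example, here is a bare-bones subject scheme map that binds a small controlled vocabulary to the @platform attribute (the subject values are hypothetical):

<subjectScheme>
  <!-- Define the taxonomy of subjects -->
  <subjectdef keys="os">
    <subjectdef keys="linux"/>
    <subjectdef keys="windows"/>
    <subjectdef keys="macosx"/>
  </subjectdef>
  <!-- Bind the taxonomy to the @platform attribute so that editors and
       processors can check the values authors actually use -->
  <enumerationdef>
    <attributedef name="platform"/>
    <subjectdef keyref="os"/>
  </enumerationdef>
</subjectScheme>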

It is in the area of metadata management that CMS systems really start to demonstrate their value. If you need sophisticated metadata management then a CMS is probably indicated. But for many small teams, metadata management is not a critical requirement, at least not at first.

Service 5: Search and Retrieval

This is another area where CMS systems provide obvious value. If your content is in an SCM, the SCM itself probably doesn't provide any particular search features.

But you can use existing search facilities, including those built into modern operating systems and those provided by your authoring tools (e.g., Oxygen's search across files and search within maps features). Even a simple grep across files can get you a long way.

If you have more implementation resources you can also look at using open-source full-text search engines or XML databases such as eXist (or commercial ones like MarkLogic) to do searching. It takes a bit more effort to set up but it might still be cheaper than a dedicated CMS, at least in the short term.

If your body of content is large or you spend a lot of time trying to find things or simply determining if you do or don't have something, a commercial CMS system is likely to be of value. But if your content is well understood by your authors, created in a disciplined way, and organized in a way that makes sense, then you may be able to get by without dedicated search support for quite a long time.

In addition, you can do things with maps to provide catalogs of components and so on. Neatness always counts and this is an area where a little thought and planning can go a long way.

Service 6: Management of Media Objects

This depends a lot on your specific media management requirements, but SCM systems like git and Subversion can manage binaries just fine. You can use continuous integration systems like Jenkins and open-source tools like ImageMagick to automate format conversion, metadata extraction, and so on.

If you have huge volumes of media assets, requirements like rights management, complex development workflows, and so on, then a CMS with DAM features is probably indicated.

But if you're just managing images that support your manuals, you can probably get by with some well-thought-out naming and organizational conventions and use of keys to reference your media objects. 
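For example, the root map can define a key for each image, with whatever metadata you care about attached to the key definition, so that topics never point at file paths directly (all names here are hypothetical):

<map>
  <title>User Guide</title>
  <!-- One key definition per managed image; topics reference the key, not the path -->
  <keydef keys="img-pump-assembly" href="images/pump-assembly.png"
          format="png" processing-role="resource-only">
    <topicmeta>
      <data name="asset-id" value="A-10432"/>
      <data name="alt-text" value="Exploded view of the pump assembly"/>
    </topicmeta>
  </keydef>
  <topicref href="topics/pump-overview.dita"/>
</map>

A topic then references the image as <image keyref="img-pump-assembly"/>, and renaming or replacing the file means touching one key definition rather than every referencing topic.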

Service 7: Link Management

This is the service where CMS systems really shine, because without a dedicated, central, DITA-aware repository that can maintain real-time knowledge of all links within your DITA content, it's difficult to answer the "where-used" question quickly. It's always possible to implement brute-force processing to do it, but SCM systems like Subversion or git are not going to do anything for you here out of the box. It's possible to implement commit-time processing to capture and update link information (which you could automate with Jenkins, for example), but that's not something a typical small team is going to implement on their own.

On the other hand, by using clear and consistent file, key, and ID naming conventions and using keys you can make manual link management easier--that's essentially what we did at IBM all those years ago when all we had were stone knives and bear skins. The same principles still apply today.

An interesting exercise would be to use a Jenkins job to maintain a simple where-used database that's updated on commit to the documentation SCM repository. It wouldn't be that hard to do.
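As a sketch of how simple the harvesting side could be, a small XSLT like the following (illustrative only, and assuming an XSLT 2.0 processor such as Saxon) run over each committed DITA file would emit one line per outbound reference, which the Jenkins job could append to a flat where-used index or load into a small database:

<?xml version="1.0" encoding="UTF-8"?>
<!-- where-used.xsl: list every @href/@keyref in a DITA file as "source -> target" lines -->
<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>

  <xsl:template match="/">
    <xsl:for-each select="//*[@href | @keyref]">
      <!-- URI of the document being processed -->
      <xsl:value-of select="document-uri(/)"/>
      <xsl:text> -> </xsl:text>
      <!-- Prefer @href when both attributes are present -->
      <xsl:value-of select="(@href, @keyref)[1]"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>

Invert that list (target to source) and you have your where-used answer; regenerating it on every commit keeps it current.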

Service 8: Deliverable Production

CMS systems can help with process automation and the value of that is significant. However, my recent experience with setting up Jenkins to automate build processes using the Open Toolkit makes it clear that it's now pretty easy to set up DITA process automation with available tools. It takes no special knowledge beyond knowing how to set up the job itself, which is not hard, just non-obvious.

Jenkins is a "continuous integration" (CI) server that provides general facilities for running arbitrary processes triggered by things like commits to source code repositories. Jenkins is optimized for Java-based projects and has built-in support for running Ant, connecting to Subversion and git repositories, and so on. This means you can have a Jenkins job triggered when you commit objects to your Subversion or git repository, run the Open Toolkit or any other command you can script, and either simply archive the result within the Jenkins server or transfer the result somewhere else. You can implement various success/failure and quality checks and have it notify you by email or other means when something breaks. Jenkins provides a nice dashboard for getting build status, investigating builds, and so on. Jenkins is an open-source tool that is widely used within the Java development community. It's easy to install and available in all the cloud service environments. If your company develops software it's likely you already use Jenkins or an equivalent CI system that you could use to implement build automation.

My experience using CloudBees to start setting up a test environment for DITA for Publishers was particularly positive. It literally took me minutes to set up a CloudBees account, provision a Jenkins server, and set up a job that would be able to run the OT. The only thing I needed to do to the Jenkins server was install the multiple-source-code-management plugin, which just means finding it in the Jenkins plugin manager and pushing the "install" button. I had to set up a github repository to hold my configured Open Toolkit but that also just took a few minutes. Barring somebody setting up a pre-configured service that does exactly this, it's hard to see how it could be much easier.

I think that Jenkins + git coupled with low-cost cloud services like CloudBees really changes the equation, making things that would otherwise be sufficiently difficult as to put off implementation easy enough that any small team should be able to do it well within the scope of resources and time they have.

This shouldn't worry CMS vendors--it can only help to grow the DITA market and help foster more DITA projects that are quickly successful, setting the stage for those teams to upgrade to more robust commercial systems as their requirements and experience grow. Demonstrating success quickly is essential to the success of any project and definitely for DITA projects undertaken by small Tech Doc teams who always have to struggle for budget and staff due to the nature of Tech Doc as a cost center. But a demonstrated DITA success can help to make the value of high-quality information clearer to the enterprise, creating opportunities to get more support to do more, which requires more sophisticated tools.


Sunday, August 11, 2013

Monastic SGML: 20 Years On

In 1993 I was working at IBM with Wayne Wohler, Don Day, Simcha Gralla, and others on IBM ID Doc, the SGML replacement for IBM's GML-based Bookmaster application, which was used for all of IBM's product documentation and much of its internal documentation. Wayne worked for IBM Publishing solutions and had been one of the developers of IBM's SGML processing tool set, having taken Charles Goldfarb's original SGML parser implementation and reworked it into something appropriate for an IBM product (Charles was also an IBM employee during the time he developed the SGML standard and HyTime). Wayne had also been involved in various efforts to develop or adapt visual editors for editing GML and SGML. At the time, Wayne and I were also developing the specifications for a general authoring support system that would manage SGML, allow editing, and so on.

IBM had been doing pretty sophisticated content reuse even back in the 80's using what facilities there were in the IBM Document Composition Facility (DCF), which was the underpinning for the Bookmaster application. So we understood the requirements for modular content, sharing of small document components among publications, and so on.

We were also trying to apply the HyTime standard to IBM ID Doc's linking requirements and I was starting to work with Charles Goldfarb and Steven Newcomb on the 2nd edition of the HyTime standard.

Out of that work we started to realize that SGML, with its focus on syntax and its many features designed to make the syntax easy to type, was difficult to process in the context of things like visual editors and content management systems, because those features imposed sequential processing requirements on the content.

We started to realize that for the types of applications we were building, a more abstract, node-based way of viewing SGML was required and that certain SGML features got in the way of that.

Remember that this was in the early days of object-oriented programming so the general concept of managing things as trees or graphs of nodes was not as current as it is now. Also, computers were much less capable, so you couldn't just say "load all that stuff into memory and then chew on it" because the memory just wasn't there, at least not on PCs and minicomputers. For comparison, at that time, it took about 8 clock hours on an IBM mainframe to render a 500-page manual to print using the Bookmaster application. That was running over night when the load on the mainframe was relatively low.

Out of this experience Wayne and I developed the concept of "monastic SGML", which was simply choosing not to use those features of SGML that got in the way of the kind of processing we wanted to do.

We presented these ideas at the SGML '93 conference as a poster. That poster, I'm told, had a profound effect on many thought leaders in the SGML community and helped start the process that led to the development of XML. I was invited by Jon Bosak to join the "SGML on the Web" working group he was forming specifically because of monastic SGML (I left IBM at the end of 1993 and my new employer, Passage Systems, generously allowed me to both continue my SGML and HyTime standards work and join this new SGML on the Web activity, as did my next employer, ISOGEN, when I left Passage Systems in 1996).

For this, the 20th anniversary of the presentation of monastic SGML to the world, Debbie Lapeyre asked if I could put up a poster reflecting on monastic SGML at the Balisage conference. I didn't have any record of the poster with me and Debbie hadn't been able to find one in years past, but I reached out to Wayne and he dug through his archives and found the original SGML source for the poster, which I've reproduced below. So I was able to post the original monastic SGML poster after all. These are my reflections.

The text of the poster is here:

Monastic SGML

Objective

Facilitate reuse of document fragments by enabling more reliable validation of document fragments without knowing all contexts in which they are used. Secondary objective: Remove sequential processing biases from datastream whereever possible.

Assumptions

Document fragments contain a single element and its content representing a proper subtree of a document and this element is valid in every point at which the fragment is referenced.

Rules

  • Don't use inclusions except on the root element, don't use exclusions
    Inclusions and exclusions can have the effect of invalidating the content of an element in one context while it remains valid in another.
  • Do not define short reference maps in the DTD
    Short references can change the recognition of delimiters based on context which can make a fragment invalid in one context while not in another.
    Other reasons to avoid them:
    • If USEMAP declarations occur in an instance, they are inherently sequential.
    • Short references can be used to obscure the true meaning of the markup in a given context.
  • Don't use #CURRENT attributes in the DTD
    #CURRENT attribute's use of values from prior specifications can make the first occurance of a fragment invalid.
    Other reasons to avoid them:
    • This construct is inherently sequential.
  • Avoid the use of IGNORE/INCLUDE marked sections
    These marked section types make it impossible to validate the information without
    • knowing all valid combinations of conditions for all using document
    • modifying all using documents to set these conditions

If you compare XML to these rules, you can see that we certainly applied them to XML, and a lot more.

Inclusions and exclusions were a powerful, if somewhat dangerous, feature of SGML DTDs, in which you could define a content model and then additionally either allow element types that would be valid in any context descending from the element being declared (inclusions) or disallow element types from any descendant context (exclusions). Interestingly, RelaxNG has almost this feature because you can modify base patterns to either allow additional things or disallow specific things, the difference being that the inclusion or exclusion only applies to the specific context, not to all descendant contexts, which was the really evil part of inclusions and exclusions. Essentially, inclusions and exclusions were a syntactic convenience that let you avoid more heavily-parameterized content models or otherwise having to craft your content models for each element type.
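For readers who never wrote SGML DTDs, the declarations looked something like this (simplified, and SGML rather than XML syntax):

<!-- "+(...)" makes the listed element types valid anywhere inside <book>, all the
     way down the tree; "-(...)" forbids them in all descendant contexts -->
<!ELEMENT book     - - (front, body, rear) +(footnote)>
<!ELEMENT footnote - - (para+) -(footnote)>

The first declaration lets footnotes appear anywhere within a book; the second keeps footnotes from nesting inside other footnotes.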

In DITA, you see this reflected in the DTD implementation pattern for element types where every element type's content model is fully parameterized in a way that allows for global extension (domain integration) and relatively easy override (constraint modules that simply redeclare the base content-model-defining parameter entity). DocBook and JATS (NLM) have similar patterns.
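The pattern, boiled down to a sketch (a simplification, not the actual DITA declarations):

<!-- Base vocabulary module: the content model lives in a parameter entity -->
<!ENTITY % section.content "(title?, (%basic.block;)*)">
<!ELEMENT section %section.content;>

<!-- Constraint module, integrated into the document-type shell *before* the base
     module: the first declaration of a parameter entity wins, so this redeclaration
     overrides the base content model without touching the base module itself -->
<!ENTITY % section.content "(title, (%basic.block;)+)">

A document-type shell then just decides which vocabulary modules, domains, and constraints to pull in, which is what makes both extension and restriction relatively cheap.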

Short references allowed you to effectively define custom syntaxes that would be parsed as SGML. It was a clever feature intended to support the sorts of things we do today with Wiki markup and ASCII equation markup and so on. In many cases it allowed existing text-based syntaxes to be parsed as SGML. It was driven by the requirement to enable rapid authoring of SGML content in text editors, such as for data conversion. That requirement made sense in 1986 and even in 1996, but is much less interesting now, both because ways of authoring have improved and because there are more general tools for doing parsing and transformation that don't need to be baked into the parser for one particular data format. At the time, SGML was really the only thing out there with any sort of a general-purpose parser.

One particularly pernicious feature of shortref was that you could turn it on and off within a document instance, as we allude to in our rules above. This meant that you had to know what the current shortref set was in order to parse a given part of the document. That works fine for sequential parsing of entire documents, but fails in the case of parsing document fragments out of any large document context.

The #CURRENT default option for SGML attributes allowed you to say that the last specified value for the attribute should be used as the default value. This feature was problematic for a number of reasons, but it definitely imposed a sequential processing requirement on the content. This is a feature we dropped from XML without a second thought, as far as I can remember. The semantics of attribute value inheritance or propagation are challenging at best, because they are always dependent on the specific business rules of the vocabulary. During the development of HyTime 2 we tried to work out some general mechanism for expressing the rules for attribute value propagation and gave up. In DITA you see the challenge reflected in the rules for metadata cascade within maps and from maps to topics, which are both complex and somewhat fuzzy. We're trying to clarify them in DITA 1.3 but it's hard to do. There are many edge cases.
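Going back to #CURRENT itself, for those who never met it, the declaration looked like this (SGML):

<!-- The value specified on the first <chapter> becomes the default for every later
     <chapter> that omits the attribute: pure sequential state in the parser -->
<!ATTLIST chapter  security  (public | internal | secret)  #CURRENT>

which is exactly why a fragment containing only later chapters can't be validated, or even reliably interpreted, on its own.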

XML still has include and ignore marked sections, but only in DTD declarations. In SGML they could go in document instances, providing a weak form of conditional processing. But for obvious reasons, that didn't work well in an authoring or management context. Modern SGML and XML applications all use element-based profiling, of course. Certainly once SGML editors like Author/Editor (now XMetal) and Arbortext ADEPT (now Arbortext Editor) were in common use, the use of conditional marked sections in SGML content largely went away.

Looking at these rules now, I'm struck by the fact that we didn't say anything about DTDs in general (that is, the requirement for them) nor anything about the use of parsed entities, which we now know are evil. We didn't say anything about markup minimization, which was a large part of what got left out of XML. We clearly still had the mindset that DTDs were either a given or a hard requirement. We no longer have that mindset.

SGML did have the notion of "subdoc" but it wasn't fully baked and it never really got used (largely because it wasn't useful, although well intentioned). You see the requirement reflected today in things like DITA maps and conref, XInclude, and similar element-based, link-based use-by-reference features. The insight that I had (and why I think XInclude is misguided) is that use-by-reference is an application-level concern, not a source-level concern, which means it's something that is done by the application, as it is in DITA, for example, and not something that should be done by the parser, as XInclude is. Because it is processed by the parser, XInclude ends up being no better than external parsed entities.

If we look at XML, it retains one markup minimization feature from SGML, default attributes. These require DTDs or XSDs or (now) RelaxNGs that use the separate DTD compatibility annotations. Except for #CURRENT, which is obviously a very bad idea, we didn't say anything about attribute defaults. I think this reflects the fact that default attributes are simply such a useful feature that they must be retained. Certainly DITA depends on them and many other vocabularies do as well, especially those developed for complex documentation.
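DITA's @class attribute is the canonical case: the specialization hierarchy rides along as defaulted attribute values that authors never type. Roughly (simplified from the real declarations):

<!-- The defaulted @class value records each element type's specialization ancestry -->
<!ATTLIST topic    class  CDATA  "- topic/topic ">
<!ATTLIST concept  class  CDATA  "- topic/topic concept/concept ">
<!ATTLIST task     class  CDATA  "- topic/topic task/task ">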

But I can also say from personal experience that defaulted attributes still cause problems for content management. If a document does not carry all the attributes in the instance and, as with DITA, certain attributes are required to support specific processing (e.g., specialization-aware processing), then processing the document outside the context of a schema that provides those defaults will fail, sometimes apparently randomly and for non-obvious reasons (at least to those not familiar with the specific attribute-based processing requirements of the document).

I later somewhat disavowed monastic SGML because I felt it put an unnecessary focus on syntax over abstraction. As I further developed my understanding of abstractions of data as distinct from their syntactic representations, I realized that the syntax to a large degree doesn't matter, and that our concerns were somewhat unwarranted because once you parse the SGML initially, you have a normalized abstract representation that largely transcends the syntax. If you can then store and manage the content in terms of the abstraction, the original syntax doesn't matter too much.

Of course, it's not quite this simple if, for example, you need to remember things like original entity references or CDATA marked sections or other syntactic details so that you can recreate them exactly. So I think my disavowal may itself have been somewhat misguided. Syntax still matters, but it's not everything. At this year's Balisage there were several interesting papers focusing on the syntax/semantics distinction and, for example, defining general approaches for treating any syntax as XML and what that means or doesn't mean.

I for one do not miss any of the features of SGML that we left out of XML and am happy, for example, to have the option of not using DTDs when I don't need or want them or want to use some other document constraint language, like XSD or RelaxNG. Wayne and I were certainly on to something and I'm proud that we made a noticeable contribution to the development of XML.

For the historical record, here is the original SGML source for the poster as recovered from Wayne's personal archive:
<h1>Monastic SGML
<h5>Objective
<p>Facilitate reuse of document fragments by enabling more reliable
validation of document fragments without knowing all contexts in which
they are used.
Secondary objective&colon; Remove
sequential processing biases from datastream whereever possible.
<h5>Assumptions
<p>Document fragments contain a single element and its content
representing a proper subtree of a document and
this element is valid in every point at which the fragment is referenced.
<h2>Rules
<ul>
<li>Don't use inclusions except on the root element, don't use exclusions
<p>Inclusions and exclusions can have the effect of invalidating the
content of an element in one context while it remains valid in another.
<li>Do not define short reference maps in the DTD
<p>Short references can change the recognition of delimiters based on
context which can make a fragment invalid in one context while not in
another.
<p>Other reasons to avoid them:
<ul compact>
<li>If USEMAP declarations occur in an instance, they are inherently
sequential.
<li>Short references can be used
to obscure the true meaning of the markup in a given
context.
</ul>
<li>Don't use #CURRENT attributes in the DTD
<p>#CURRENT attribute's use of values from prior specifications
can make the first occurance of a fragment invalid.
<p>Other reasons to avoid them:
<ul compact>
<li>This construct is inherently sequential.
</ul>
<li>Avoid the use of IGNORE/INCLUDE marked sections
<p>These marked section types make it impossible to validate the
information without
<ul compact>
<li>knowing all valid combinations of conditions for all using
document
<li>modifying all using documents to set these conditions
</ul>
</ul>


Sunday, February 20, 2011

Physical Improvement for Geeks: The Four Hour Body

I've just read through all of Tim Ferriss' The Four Hour Body (http://fourhourbody.com/) (4HB). Short version of review: found it really interesting and helpful and generally to be full of sound advice and guidance provided with a dose of humor. I am starting on the book's Slow Carb Diet (SCD) in an attempt to lose 20lbs of mostly visceral fat (read "lose my beer gut" and try to live to see my daughter graduate from college).

The book is written from a geek's perspective for geeks. It essentially takes an engineering approach to body tuning based on self experimentation, measurement, and application of sound scientific principles. In a post on the 4HB blog Tim captures the basic approach and purpose of the book:

"To reiterate: The entire goal of 4HB is to make you a self-sufficient self-experimenter within safe boundaries. Track yourself, follow the rules, and track the changes if you break or bend the rules. Simple as that. That’s what I did to arrive at my conclusions, and that’s what you will do — with a huge head start with the 4HB — to arrive at yours."

I've done Atkins in the past with some success so I know that for me a general low-carb approach will work. The Slow Carb Diet essentially takes Atkins and reduces it to the essential aspects that create change. The biggest difference between Atkins and the SCD is the SCD eliminates all dairy because of its contribution to insulin spiking despite a low glycemic index. So no cheese or sugar-free ice cream (which we got really good at making back in our Atkins days). The SCD also includes a weekly "cheat day" where you eat whatever crap you want, as much as you can choke down. After 6 days I've lost 3.5 lbs, which is about what I would expect at the start of a strict low-carb diet. I haven't had the same degree of mind alteration that I got from the Atkins induction process, which is nice, because that was always a pretty rough week for everybody.

What I found interesting about the 4HB was that Tim is simply presenting his findings and saying "this worked, this didn't, here's why we think this did or didn't work." He's not selling a system or pushing supplements or trying to sell videos. His constant point is "don't take my word for it, test it yourself. I might be spouting bullsh*t so test, test, test."

As an engineer that definitely resonated with me. He also spends a lot of time explaining why professional research is often useless, flawed, biased, or otherwise simply not helpful, if not downright counterproductive. As somebody who's always testing assumptions and asking for proof I liked that too.

He even has an appendix where he presents some data gathered from people who used the SCD, which, as presented, suggested some interesting findings and made the diet look remarkably effective. He then goes through the numbers and shows why they are deceptive and can't be trusted in a number of ways. If his intent was to sell the diet he would have just presented the numbers. Nice.

His focus is as much on the mental process as on the physical process: measure, evaluate, question, in short, think about what you're doing and why. Control variables as much as possible in your experiments.

I highly recommend the book for anyone who's thinking about trying to lose weight or improve their physical performance in whatever way they need to--Ferriss pretty much covers all bases, from simple weight and fat loss to gaining muscle, improving strength, etc.

He has two chapters focused on sexual improvements, one on female orgasm and one on raising testosterone levels, sperm count, and general libido in males. These could have come off as pretty salacious and "look at what a sex machine I've become" but I didn't read them that way. Rather his point was that improving the sexual aspects of one's life is important to becoming a more complete person--it's an important part of being human so why not enjoy it to its fullest? I personally went through a male fertility issue when my wife and I tried to start a family and if I'd had the chapter on improving male fertility at that time (and if my fertility had actually been relevant) it would have been a godsend. One easy takeaway from that chapter: if you want kids don't carry an active cell phone in your pocket.

An interesting chapter on sleep: how to get better sleep, how to need less sleep, etc. Some interesting and intriguing stuff there as well. Some simple actions that might make significant positive changes in sleep patterns, as well as a technique for getting by on very little sleep if you can maintain a freaky-hard nap schedule.

Overall I found the book thoughtful, clearly written, engaging, entertaining, and generally helpful. I found very few things that made me go "yeah right" or "oh please" or any of the reactions I often have to self-help books. He stresses being careful and responsible and having a clear understanding of what your goal is. In short, sound engineering practice applied to your physical self.

Dr. Macro says check it out.


Saturday, February 19, 2011

Chevy Volt Adventure: Feb Diagnostic Report

Just got the February vehicle diagnostic report email from the Volt. I'm not sure why I find it so cool that my car can send me email, but I do.

The salient numbers are:

35 kW-hr/100 miles

1 Gallon of gasoline used. [This is actually an overstatement as we have only used 0.2 gallons since returning from our Houston trip at the end of December.]

Our electricity usage for January (the latest numbers I have) was (numbers in parens are for Jan 2010):

Total kW-hr: 954 (749)
Grid kW-hr: 723 (455)
Solar kW-hr: 231 (294)
Dollars billed: $58.37 ($35.12)

$/kWh used: $0.06 ($58.37/954)

kWh/mile: 0.35 (35kWh/100miles)

$/mile: $0.02

Our bill for Dec was $32.00, so we spent an extra $26.00 on electricity in January, some of which can be attributed to the unusually cold winter we've been having. We also produced about 60kWh less this January than last.

But if we assume that most of the difference was the Volt, that means it cost us about $20.00 to drive the vehicle for the month. We used essentially no gasoline so the electricity cost was our total operating cost.

Looking at the numbers it also means that the draw from the car is less than or roughly equal to the solar we produced over the same period. Not that much of that solar went to actually charging the Volt since we tend to charge later in the day or over night after having done stuff during the day, but if Austin Energy actually gave us market rates for our produced electricity rather than the steep discount they do give us, we could truthfully say we have a solar powered car, even in January. For contrast, our maximum solar production last year was 481 kWh in August, with numbers around 400 kWh most months.

Compare this cost with a gasoline vehicle getting 30 mpg around town at $3.00/gallon (current price here in Austin):

30 miles/gallon = 0.033 gallons/mile * $3.00/gallon =

$/mile: 0.10

However, our other car, a 2005 Toyota Solara, only gets about 22 mpg around town, which comes out to

$/mile: 0.14

Of course these numbers only reflect direct operating cost, not the cost of our PV system or the extra cost of the Volt itself relative to a comparable gas-powered vehicle, but that's not the point, is it? The point is not just lower operating cost but being a zero-emissions vehicle most days and using (or potentially using) more sustainable sources of energy.

But another interesting implication here is what would happen (or will happen) when the majority of vehicles are electric? If our use is typical, it means about a 25% increase in electricity consumption just for transportation. What does that mean for the electricity infrastructure? Would we be able in the U.S. to add 25% more capacity in say 10 years without resorting to coal? How much of that increase can be met through conservation? It seems like it could be a serious challenge for the already-straining grid infrastructure, something we know we need to address simply to make wind practical (because of the current nature of the U.S. grid).

If Chevy and the other EV manufacturers can bring the cost down, which they inevitably will, people are going to flock to these cars because they're fun to drive, cheaper to operate, and better for the air. Given the expected rate of advance in battery technology and the normal economies of scale, it seems reasonable to expect the cost of electric vehicles to be comparable to gasoline vehicles in about 5 years. If gas prices rise even $1.00/gallon in that time, which seems like a pretty safe bet (but then I would have expected gas to be at $5.00/gallon by now after its spike back in 2008), then the attractiveness of electric vehicles will be even greater.

Which is all to say that I fully expect EVs like the Volt to catch on in a big way in about 5 years, which I think could spell, if not disaster, then at least serious strain in the U.S. electricity infrastructure. I know the City of Austin is thinking about it because that's their motivation for paying for our charging station: monitor the draw from the car so they can plan appropriately. But are we doing that at a national level? I have no idea, but history does not instill confidence, let us say.


Wednesday, January 19, 2011

Chevy Volt Adventure: Fun to Drive

We've been driving the Volt around town now for a few weeks and the biggest surprise to me is how much fun it is to drive. The instant acceleration, freaky smoothness, and weight-enhanced handling let you zip around, corner hard, and do it all without fuss or noise. And we haven't even tried sport mode yet.

As for the car itself, it seems to be holding up well--I haven't noticed anything particularly tinny or annoying, with the possible exception of the charge port cover. It's just a little cover, but it seems a bit weak, and the latch is less aggressive than I'd like--a couple of times I've thought I'd pushed it closed when it hadn't caught.

We are clearly not driving in the most efficient manner because our full-charge electric range is currently estimated at about 30 miles, which our Volt Assistant at GM assures us reflects our profligate driving style and not an issue with reduced battery capacity.

As a family car it's working fine. With our around-town driving we've only had to use a fraction of a gallon of gas when we've forgotten to plug in after a trip. So our lifetime gas usage total is about 8.6 gallons, of which 8.5 were used on the round trip to Houston.


Tuesday, January 04, 2011

Chevy Volt Adventure: Houston Trip 1

On Christmas Eve we loaded up the Volt and headed to Grandma's house in Houston.
[Photo IMG_0948: the Volt's cargo area loaded for the trip]
The picture shows the cargo area loaded for the trip. The cargo space is a little cramped but was able to accommodate what we needed for this trip, including all the gifts. It would be hard pressed to hold three full-sized rollaboards.

In the car we had me, my wife, our daughter, and our dog, Humphrey (a basset hound). Everyone was comfortable but this is definitely a 4-passenger vehicle because of the bucket seats in back. The seats were reasonably comfortable for a 3-hour trip, comparable to what I'm used to from our other car, a 2005 Toyota Solara convertible.

The total round trip from our house to Grandma's house is about 450 miles. The trip meter reports we used 8.1 gallons for a trip MPG of about 51, which is pretty good.

In our Solara, which averages about 22 MPG overall and gets probably 30 or so on the highway, we usually fill up at the halfway point out and back, using a full 15-gallon tank over the course of the trip. On this trip we didn't stop to fill up until the return, when the tank showed 3/4 empty. I put in about 6 gallons but I think the tank didn't fill (it was the first time I'd put gas in so I had no idea how much to expect to need—the tank must be 10 gallons if 3/4 reflected an 8-gallon deficit).

On the way out the battery lasted from Austin to just outside Bastrop, about 30 miles. It's clear that, as expected, highway speeds are less efficient than around-town speeds. I'd be interested to know what the efficiency curve is: is it more or less linear or, more likely, does it curve sharply upward above, say, 50 MPH? My intuition says 40 MPH is the sweet spot. I tried to keep it between 60 and 70 for most of the trip (the posted limit for most of the trip is 70). I drove a little faster on the way home, having realized that it didn't make much difference in efficiency.

Highway driving was fine. The car is heavy for its size, with the batteries distributed along the main axis, which makes it handle more like a big car than the compact it is. Highway 71 is pretty rough in places but the car was reasonably quiet at 70. When we left I-10 in Houston there was enough accumulated charge to use the battery for the couple of miles to my mother-in-law's house.

It definitely has power to spare and plenty of oomph. There's no hesitation when you stamp the accelerator and I had no problem going from 45 to 65 almost instantly to get from behind a slow car on I-10. We have yet to try the "sport" driving mode but now I'm almost afraid to.

The car is really smooth to drive--like driving an electric golf cart in the way it just smoothly takes off and doesn't make any noise.

If we had a problem it was the underbuilt electrical circuit at Grandma's that served the garage—at one point when we had the car plugged in and charging the circuit breaker flipped (a 15-amp circuit)—turned out the circuit also served most of the kitchen, where we were busy preparing Christmas dinner.

If there is any practical issue with the vehicle it's the climate control—it takes a lot of energy to heat it. Houston was having a cold snap so we got to test the heating system. The multi-position seat heaters are nice but keeping the controls on the "econ" setting meant that backseat passengers sometimes got a little chilled. You do realize how much waste heat gas engines produce when you don't have it available to turn your car into a sauna.

It was also weird to get back from a drive and realize that the hood is still cold.

We spent the last week traveling in the Northwest and rented the cheapest car Enterprise offers, which turned out to be a Nissan Versa, a tinny little econobox. The contrast was dramatic and made me appreciate the Volt. The two vehicles are comparable in size and capacity (but not cost, of course), but the Versa had a hard time making it up to highway speed, sounded like the engine might come apart or explode under stress, and felt like it might blow off the road in a stiff breeze.

Now that we're back to our normal workaday life we'll see how it does in our normal around-town driving, but my expectation is that we'll use very little, if any, gas as we seldom need to go more than 10 miles from home (our longest usual trip is up north to Fry's, which is about a 20-mile round trip). We'll probably take it out to Llano and Lockhart for BBQ if we get a warm weekend in the next month or so.

On the way back from Houston we ended up near a Prius and ran into them at the gas station. They were interested in how the Volt was working and we got to compare MPG and generally be smug together. I ended up following them the rest of the way into Austin, figuring they probably reflected an appropriately efficient speed.

And I'm still getting a kick out of plugging it in whenever I bring it back home.


Tuesday, December 21, 2010

Chevy Volt Adventure

My family is now the second (in Texas or Austin, not 100% sure) to take delivery of a 2011 Chevy Volt. We got it last night and it's sitting in the carport happily charged.

The car is very cool, very high tech. It sends you status emails. It chides you for jackrabbit starts (although I gather other electric and hybrid vehicles do as well).

It is freaky quiet in electric mode, a bit rumbly in extended mode.

The interior is pretty nice, reasonably well laid out, nicely detailed. The back seat is reasonably comfortable (I have the torso of a 6-foot person and my head cleared the back window).

Accelerates snappily in normal driving mode (haven't had a chance to try the "sport" mode yet). Handles pretty nicely (the batteries are stored along the center length of the vehicle, giving it pretty good balance).

We'll be driving it to Houston, about 500 miles round trip, in a couple of days. I'll report our experience.

Early adopters get some perks. We get 5 years of free OnStar service. We get a free 240v charging station from the City of Austin at the cost of letting them monitor the energy usage of the charger. We get a special parking space at the new branch library near us. The Whole Foods flagship store has charging stations--might actually motivate me to shop there (we normally avoid that Whole Foods because it's really hard to park and you know, it's Whole Foods).

One thing that will take some getting used to is not having to put a key into it in order to operate it. I kept reflexively reaching toward the steering column to remove the key that wasn't there.

Here's a question for you Electrical Engineers out there: what is the equivalent to miles per gallon for an electric vehicle? Is it miles per megajoule? miles per amp-hour?

I'm trying to remember what the unit of potential electrical energy is and coming up blank (not sure I ever really knew).

Oh, and since we have a PV system on the house and can control when charging takes place, I am going to claim that this Volt is a solar powered vehicle.


Wednesday, September 01, 2010

Norm Reconsiders DITA Specialization

Norm Walsh has published a very interesting post to his blog, Reconsidering specialization, part the first.

This is very significant and I eagerly await Norm's thoughts.

As Norm relates in his post, he and I had what I thought was a very productive discussion about specialization and what it could mean in a DocBook context. I think Norm characterized my position accurately, namely that the essential difference between DocBook and DITA is specialization and that makes DITA better.

Here by "better" I mean "better value for the type of applications to which DITA and DocBook are applied". It's a better value because:

1. Specialization enables blind interchange, which I think is very important, if not of utmost importance, even if that interchange is only with your future self.

2. Specialization lowers the cost of implementing new markup vocabularies (that is, custom markup for a specific use community) by roughly an order of magnitude.

There's more to it than that, of course, but those are the key bits.

All the other aspects of DITA that people see as distinguishing: modularity, maps, conref, etc., could all be replicated in DocBook.

If we assume that DITA's more sophisticated features like maps and keyref and so forth are no more complicated than they need to be to meet requirements, then the best that DocBook could do is implement the exact equivalent of those features, which is fine. So to that degree, DocBook and DITA are (or could be) functionally equivalent in terms of specific markup features. (But note that any statement to the effect that "DITA's features are too complicated" reflects a lack of understanding of the requirements that DITA satisfies--I can assure you that there is no aspect of DITA that is not used and depended on by at least one significant user community. That is, any attempt, for example, to add a map-like facility to DocBook that does not reflect all the functional aspects of DITA maps will simply fail to satisfy the requirements of a significant set of potential users.)

But note that currently DocBook and DITA are *not* functionally equivalent: DocBook lacks a number of important features needed to support modularity and reuse. But I don't consider that important. What really matters is specialization.

Note also that I'm not necessarily suggesting that DocBook adapt the DITA specialization mechanism exactly as it's formulated in DITA. I'm suggesting that DocBook needs the functional equivalent of DITA's specialization facility.

Note also that DocBook as currently formulated at a content model level probably cannot be made to satisfy the constraints specialization requires in terms of consistency of structural patterns along a specialization hierarchy and probably lacks a number of content model options that you'd want to have in order to support reasonable specializations from a given base.

But those are design problems that could be fixed in a DocBook V6 or something if it was important or useful to do so.

Finally, note that in DITA 2.0 there is the expectation that the specialization facility will be reengineered from scratch. That would be the ideal opportunity to work jointly to develop a specialization mechanism that satisfied requirements beyond those specifically brought by DITA. In particular, any new mechanism needs to play well with namespaces, which the current DITA mechanism does not (but note that it was designed before namespaces were standardized).

Monday, August 09, 2010

Worse is Better, or Is It?

At the just-concluded Balisage conference, Michael Sperberg-McQueen brought up the (apparently) famous "worse is better" essay by Richard P. Gabriel (Wikipedia entry here, original paper here). I had never heard of this (or at least had no memory of ever hearing of it) even though it is directly relevant to my experiences as a standard developer and engineer, where I've done things in both the "MIT" way (correctness is most important) and, more or less, the "New Jersey" way (simplicity is most important). I was actually very surprised that nobody had ever pointed me to it before.

Gabriel's original argument is essentially that software that chooses simplicity over correctness and completeness has better survivability for a number of reasons, and cites as a prime example Unix and C, which spread precisely because they were simple (and thus easy to port) in spite of being neither complete functionally nor consistent in terms of their interfaces (user or programming). Gabriel then goes on, over the years, to argue against his own original assertion that worse is better and essentially falls into a state of oscillation between "yes it is" and "no it isn't" (see his history of his thought here).

The concept of "worse is better" certainly resonated with me because I have, for most of my career, fought against it at every turn, insisting on correctness and completeness as the primary concerns. This is in some part because of my work in standards, where correctness is of course important, and in part because I'm inherently an idealist by inclination, and in part because I grew up in IBM in the 80's when a company like IBM could still afford the time and cost of correctness over simplicity (or thought it could).

XML largely broke me of that. I was very humbled by XML and the general "80% is good enough" approach of the W3C and the Web in general. It took me a long time to get over my anger at the fact that they were right because I didn't want to live in that world, a world where <a href/> was the height of hyperlinking sophistication.

I got over it.

Around 1999 I started working as part of a pure Extreme Programming team implementing a content management system based on a simple but powerful abstract model (the SnapCM model I've posted about here in the past) and implemented using iterative, requirements-driven processes. We were very successful, in that we implemented exactly what we wanted to, in a timely fashion and with all the performance characteristics we needed, and without sacrificing any essential aspects of the design for the sake of simplicity of implementation or any other form of expediency.

That experience convinced me that agile methods, as typified by Extreme Programming, are very effective, if not the most effective, engineering approach. But it also taught me the value of good abstract models: they ensure consistency of purpose and implementation and allow you to have both simplicity of implementation and consistency of interface--one need not be sacrificed for the other if you do a bit of advance planning (but not too much--that's another lesson of agile methods).

Thinking then about "worse is better" and Gabriel's inability to decide conclusively whether it actually is better, the conclusion I came to is that the reason Gabriel can't decide is that both sides of his dichotomy are in fact wrong.

Extreme Programming says "start with the *simplest* thing that could possibly work" (emphasis mine). This is not the same as saying "simplicity trumps correctness"; it just says "start simple". You then iterate until your tests pass. The tests reflect documented and verified user requirements.

The "worse is better" approach as defined by Gabriel is similar in that it also involves iteration but it largely ignores requirements. That is, in the New Jersey approach, "finished" is defined by the implementors with no obvious reference to any objective test of whether they are in fact finished.

At the same time, the MIT approach falls into the trap that agile methods are designed explicitly to avoid, namely overplanning and implementation of features that may never be used.

That is, it is easy, as an engineer or analyst who has thought deeply about a particular problem domain, to think of all the things that could be needed or useful and then design a system that will provide them, and then proceed to implement it. In this model, "done" is defined by "all aspects of the previously-specified design are implemented", again with no direct reference to actual validated requirements (except to the degree the designer asserts her authority that her analysis is correct). [The HyTime standard is an example of this approach to system design. I am proud of HyTime as an exercise in design that is mathematically complete and correct with respect to its problem domain. I am not proud of it as an example of survivable design. The fact that the existence of XML and the rise of the Web largely made HyTime irrelevant does not bother me particularly because I see now that it could never have survived. It was a dinosaur: well-adapted to its original environment, large and powerful and completely ill adapted to a rapidly changing environment. I learned and moved on. I am gratified only to the degree that no new hyperlinking standard, with the possible exception of DITA 1.2+, has come anywhere close to providing the needed level of standardization of hyperlinking that HyTime provided. It's a hard problem, one where the minimum level of simplicity needed to satisfy base requirements is still dauntingly challenging.]

Thus both the MIT and New Jersey approaches ultimately fail because they are not directly requirements driven in the way that agile methods are and must be.

Or put another way, the MIT approach reflects the failure of overplanning and the New Jersey approach reflects the failure of underplanning.

Agile methods, as typified by Extreme Programming, attempt to solve the problem by doing just the right amount of planning, and no more, and that planning is primarily a function of requirements gathering and validation in the support of iteration.

To that degree, agile engineering is much closer to the worse is better approach, in that it necessarily prefers simplicity over completeness and it tends, by its start-small-and-iterate approach, to produce smaller solutions faster than a planning-heavy approach will.

Because of the way projects tend to go, where budgets get exhausted or users get bogged down in just getting the usual stuff done or technology or the business changes in the meantime, it often happens that more sophisticated or future-looking requirements never get implemented because the project simply never gets that far. This has the effect of making agile projects look, after the fact, very much like worse-is-better projects simply because informed observers can see obvious features that haven't been implemented. Without knowing the project history you can't tell if the feature holes are there because the implementors refused to implement them on the grounds of preserving simplicity or because they simply fell off the bottom of the last iteration plan.

Whether an agile project ends with a greater degree of consistency in interface is entirely a function of engineering quality but it is at least the case that agile projects need not sacrifice consistency as long as the appropriate amount of planning was done, and in particular, a solid, universally-understood data or system model was defined as part of the initial implementation activity.

At the time Unix was implemented the practice of software and data modeling was still nascent at best, and implementation was hard enough. Today we have a deep, established practice of software modeling, we have well-established design patterns, and we have useful tools for capturing and publishing designs, so there is no excuse for not having a model for any non-trivial project.

To that degree, I would hope that the "worse is better" engineering practice typified by Unix and C is a thing of the past. We now have enough counterexamples of good design with the simplest possible implementation and very consistent interfaces (Python, Groovy, Java, XSLT, and XQuery all come to mind, although I'm sure there are many, many more).

But Michael's purpose in presenting worse-is-better was primarily as it relates to standards, and I think the point is still well taken--standards have value only to the degree they are adopted, that is, to the degree they survive in the Darwinian sense. Worse-is-better definitely tells us that simplicity is a powerful survival characteristic--we saw that with XML relative to SGML and with XSLT relative to DSSSL. Of course, it is not the only survival characteristic and is not sufficient, by itself, to ensure survival. But it's a very important one.

As somebody involved in the development of the DITA standard, I certainly take it to heart.

My thanks to Michael for helping me to think again about the value of simplicity.

Thursday, July 22, 2010

At Least I Can Walk To Work, Part 2

OK, I may have overreacted before. I was really pretty depressed there for a couple of days. I even started seriously considering buying a gun to have in the house "just in case". I think I've calmed down a bit. For one thing, I couldn't stay in that state without going literally mad.

As a counter to the depressing doomsaying of James Kunstler I found Peak Oil Debunked (POD), which seems to be a reasonably thoughtful counter to the most extreme of the Peak Oil predictions. It cheered me up a bit, although I found a number of the arguments therein to be either not entirely convincing or accurate but missing the point, especially as regards agriculture. POD isn't actually debunking the notion of a peak; it's just trying to counter the most extreme doomsday prognosticating--it isn't saying there is no peak, just that the results can't be as extreme as Kunstler and the other Peak Oil doomsayers are predicting, which is good.

But it's hard to see how a serious contraction of resources, especially food (and by extension, capital for investment), isn't inevitable in the relatively short term. That can't play out well. Our current recession has to be a taste of what's to come--there just isn't going to be the level of energy input needed to pull us out the way WWII did for the Great Depression, so even if the contraction is slow it will still be a contraction and that will be hard on everyone who is even a little overextended or otherwise dependent on continued growth, which may be all of us, even those of us who have eliminated our debt and have a little cash set aside.

We're starting to see practical electric vehicles coming on line, but does that help if it keeps us from replacing the suburbs with more dense towns and villages? If we aren't building trains at the same time we're building wind farms, I fear we're missing the point. Why can't I take a 300kph train from Austin to Houston or Austin to Dallas?

On the other hand, it's going to be a long time before we can electrify air travel, barring some unexpected miracle in electricity storage density, and planes can't run on coal, so I think we're not far from seeing air travel severely curtailed. The answer to my train question is "because I can fly Austin to Dallas on SWA for $100.00"--what motivation does any private enterprise have to build that train, and what motivation does the State of Texas have, given it's millions in the hole just now? None. But I think that economic picture has to change soon (within the next decade).

But in many ways Kunstler's arguments are about financial chaos as a side effect of inevitable contraction and that seems much scarier and more likely than simply running out of fuel for cars. We're already in a situation where credit is hard to get. It won't matter how many electric cars GM or Toyota builds if nobody can get a loan to buy them.

So I don't know. It seems likely that market forces will tend to reduce consumption as prices increase, mitigating at least the immediate effects of fall-off in supplies. As the Peak Oil Debunked blog points out, there is a lot of room for conservation in the U.S. For myself, I could almost entirely eliminate the use of my car for things like getting groceries, the pharmacy, going out to eat, as long as I was willing to eat the time cost. We live within walking distance of all the essential services we need. But Austin, like most U.S. cities, doesn't provide the level of public transport needed to make more far-flung trips convenient, much less pleasant. I could shave a couple of kWhs a day from my electricity use if I really had to. But my house is already very energy efficient so it would be about using less A/C and putting all the wall warts on switches and that sort of thing, stuff for which we currently have no economic incentive in the face of the inconvenience and discomfort. I'd rather just invest in more solar panels as long as I have the cash to do so.

Let's say oil supplies tighten significantly over the next two years and bus ridership goes up--Austin's transit agency is already in serious budget trouble and would have a hard time reacting to a surge in ridership (as they did in 2008). While we finally have a (largely pointless) light rail system, it would take a remarkable effort to put in a more comprehensive trolley or Portland-style light rail system in less than 5 years, even given public demand for it.

[I say our light rail system is pointless because it essentially serves one suburb of Austin, making it convenient for people who work downtown and live in Cedar Park to get to work. The train doesn't go anywhere else interesting and doesn't usefully serve anyone south of downtown. Ridership has been a fraction of projections and of capacity. Small surprise. Now if there was a train that went from downtown to the airport and that came south to at least Ben White on Congress that would help a lot. I know of no plans along those lines.]

So is there anything actionable out of this new-found appreciation for peak oil and the inevitable contraction in our economy and life styles? Thanks to a recent inheritance I have some cash available. Should I buy a Volt? Expand my PV system? Buy gold? Put by a year's worth of rice and canned goods? Buy Treasury bonds? Fill my garage with machine tools? Buy a shotgun?

For now I think I'm going to take the following actions:

1. Put my name down on the list for the Chevy Volt. Austin is one of four launch cities.
2. Travel with my family as much as we can before air travel becomes a thing of the past for all but the richest humans.
3. Put in the rain water cistern we had to cut from our original house construction project (the rain capture plumbing is in place).
4. Think seriously about expanding the PV system, although I'm hesitant to do so too quickly as new PV technology is developing rapidly.
5. Avoid taking on any new debt--we are currently free of consumer debt and I'd like to keep it that way.
6. Continue to reduce our expectations, as a family, of what "enough" means, and try to teach my daughter that things are not what life is about.

I'm going to keep tracking opinion and bloviation in the Peak Oil space--it's entertaining if nothing else.

More as it develops....

Monday, July 19, 2010

Good Thing I Can Walk to Work

Given some of my colleagues in the XML community, I feel like I might be coming a little late to this party, but I just read James Howard Kunstler's The Long Emergency, a cogent and calm analysis of peak oil and an exploration of what that is likely to mean for the world as a whole and the U.S. in particular.

Kunstler's point is basically this: 100 years of cheap oil have allowed us to create a society of artificially inflated wealth, allowing us to overpopulate the Earth far beyond its normal carrying capacity (literally "eating oil" in the form of crops grown with artificial fertilizers, pesticides, and irrigated using water pumped by cheap oil), live in unsustainable suburbs and skyscrapers, and generally live far beyond our means. And this period of cheap oil is about to start ending (and now, six years after the book was published, it has), as we pass the "peak oil" point, the point after which the world supply of oil steadily decreases.

His prediction is that the decrease in supply will inevitably lead to a number of very serious problems, of which the most dire will simply be a lack of food--without cheap oil to irrigate, fertilize, and transport food people will starve. Lots of people. He makes the point that the world population at the start of the industrial revolution was about 1 billion people, which we can take to be the maximum solar carrying capacity of the earth. We're roughly 6 times that now. Not good. This can only lead to resource conflicts of the most serious kind.

Thus we are entering a time of contraction on all fronts: food supplies, energy for transportation, fuel for heating and industry, feedstocks for chemicals and plastics, etc. The reduction in real wealth will make it harder to maintain our current infrastructure and even build those things that might replace some of the lost oil (how do you raise the money to build wind farms or solar panel fabs when the growth-based economy has collapsed and the cost of everything is rising?).

In short, the 100-year period of constant growth in wealth is over, never to return. The disruptions will be severe and hard to predict.

All of this will be exacerbated by global warming (itself caused by coal and oil), which will further disrupt agriculture not to mention flooding coastal cities and, in the worst case, shutting down the gulf stream and freezing Europe.

Kunstler's arguments seem to be well grounded in fact and a clear historical context. Many of his facts were consistent with what I've read in other contexts. I didn't check his primary sources but he at least documented his key facts. He's not hysterical and does not appear to have any particular ideological axe to grind, other than clearly viewing all politicians as spineless and useless.

One part of the book that I found particularly striking was his remarkably accurate prediction of the financial meltdown that occurred in 2008.

He also makes the point that no alternative energy source is going to do much to help, certainly not in the short term (meaning the next few decades), for the simple reason that, even if we could generate all the electricity we needed with solar, wind, and nuclear, we can't replace our oil-based transportation system with an electricity-based one. Not to mention it's unlikely the U.S. would be willing or able to build enough nuclear plants fast enough. Hydrogen is patently nonsense and ethanol is a scam. If we start rebuilding the train network now, we might avoid the worst, but we're not seeing any political movement in that direction (although Warren Buffett's investment in Burlington Northern Santa Fe seems much more shrewd now than it may have seemed at the time).

If he's anywhere close to right (and I think he is, or I wouldn't be mentioning it here), then we're in for some pretty serious disruptions in all aspects of society.

For information technology, one question becomes: where does the electricity come from to power the servers that we now depend on for our Google and Amazon and Bing? Not to mention the manufacturing facilities to build the computers themselves. Chip fabs cannot be scaled down to cottage industries.

I find it interesting that the trend in large-scale server farms has been to site them in the Pacific Northwest where hydro power is plentiful. That could be to our advantage. Will the economy remain sufficiently intact to even have a need for the sort of abstract computing infrastructure we build and maintain? Will soaring transportation costs make computing that much more important?

I just returned from a week-long trip to Oxford, UK, which I found to be precisely the sort of small city Kunstler says we'll all have to live in before too long: dense, walkable, with good public transport, not too vertical. I walked and took the bus to get from my hotel to my client about 2 miles away. I ate in neighborhood restaurants, all quite good, no more than a few minutes' walk or a short bus ride away. Several of my colleagues did not own cars and several biked to work every day. The cars on the streets were significantly smaller, on average, than I'm used to here in the States.

When I got home to Austin I was struck by the almost cartoonish hugeness of the vehicles in the airport parking garage--they seemed to be almost exclusively the hugest SUV models made. Even when we drive our 15-year-old Explorer, which was the largest SUV Ford made at the time we bought it, we often lose it behind bigger monsters parked around it. Our Toyota, which would have been a large car in Oxford, was really hidden.

How many more overseas trips can I expect to take in my job? I didn't do anything that couldn't have been done well enough remotely, although being there physically made things more efficient and had significant social benefit.

Another question that immediately came to my mind is what can I do to really prepare for the sort of almost unthinkable change that Kunstler says is coming? I've already done a good bit by moving closer to the city center, building an energy-efficient house that supplies some of its own electricity (and could supply more given an investment in more PV panels or a small wind generator). We raise chickens and grow a few vegetables and could grow a good bit more. There's land in the neighborhood on which a community garden could be based (could probably feed a good bit of the neighborhood with the open (and largely unused) areas on elementary school grounds just a couple of blocks away).

But what about natural gas? Kunstler points out something I hadn't known, which is that when natural gas wells tap out, they do so very quickly and without warning. And he claims most of our gas fields are already starting to play out. There's not much you can replace natural gas with. Could I build a digester and produce enough methane from chicken poop? Where would I get the grain to feed the chickens to make the poop? Right now we get organic grain from a mill here in Texas, but before that mill opened, we got it from a mill somewhere quite far away (Kansas? Pennsylvania? I don't remember now). Seems unlikely I could produce enough to keep my on-demand water heater running, much less run the stove or the furnace.

Here in Austin we're reasonably well situated--we have relatively warm winters, good solar exposure, water from lakes (not the Ogallala aquifer), decent agricultural land, and less sprawl than most other cities in Texas. But would that be enough? I don't know. Austin was the edge of the frontier when it was first settled in the 1830s. It could be again.

It would be hard for us to survive full summer heat without air conditioning, even with the passive solar aspects of our house, although our PV system was designed in part to cover the daytime load of the A/C system, so maybe we would be OK there. With an electric car we could get around as long as there was sun to charge it. And we can walk or bike to most of what we need. (Should I get on the list for a Volt? Austin will be a launch city, and I'm seriously thinking about it since I do now have the cash to buy one.)

In any case, the book is a real eye-opener.

If there is any silver lining to all this it seems pretty clear that a lot of current issues like over-dependence on corn-based food, globalization's homogenizing and dislocating effects, over-sedentary, over-fed first-worlders, will go away pretty quickly. But I can't see those changes being necessarily pleasant or welcomed by the majority. I'm glad I'm handy with tools and subscribe to Make. I'm thinking that maybe having a milling machine and a metal lathe in the garage might be good moves.

My father grew up on a pear and apple ranch in the Hood River valley of Oregon. We visited there recently and I got to talk to the woman who now runs the ranch (the granddaughter of the man who founded the ranch back in the early 20's, just as oil-based agriculture was coming into its own). We talked about globalization of agriculture and the local food movement and such.

She pointed out that their ranch, which is small by Hood River valley standards, produces as many pears as the state of Oregon consumes in a year. Most of the pears they grow are sold overseas. If they didn't have access to those markets, where would they sell? It's hard to see that ranch (or the Hood River Valley or the Yakima Valley) being viable in their current forms for much longer.

So enjoy those pears and $3.00 a pound Yakima cherries while you can. I know I am.

Sunday, April 04, 2010

My Precious: iPad Day 1

I bought an iPad at 9:30 am CDT 3 April 2010. I had to.

My nominal justification was to see if it would work for my dad. It absolutely will.

But really I just couldn't not have one.

Here are my initial impressions:

- typing is remarkably efficient. I am writing this on the iPad sitting in a comfy chair, pad in my lap (put it on a pillow after a while so the cord would reach--battery finally running down). I am a fast touch typist and I can sort of touch type here, but really it's just fast hunt and peck. But I don't feel like it's slowing me down. I do miss arrow keys--I find navigating around in a multiline edit field a bit tedious.

- the response is very fast. Web pages load fast, apps load fast. Not like an iPhone at all.

- web browsing has a full-computer feel. Have not yet gone to a site that didn't seem to work in Safari (Flash sites excepted, of course)

- so far everything has just worked, which is a lot of the point of all apple products

- the only potential issue so far has been that the volume indicator wouldn't go away while watching some YouTube videos, but it hasn't recurred

- everyone who sees it wants one. Badly.

- the Netflix app worked well. Punched up Willy Wonka and it played very nicely

- The Elements interactive book is a pretty amazing demonstration of what the device can mean for instruction and reference.

- I downloaded all the free newspaper apps I could find and they all provided a very satisfying reading experience. One of the things I was looking for was that Dave-Bowman-reading-the-paper-on-his-tablet-over-breakfast experience, and I think we have it. Will definitely consider a NYT subscription--we get the Sunday Times and usually buy the Tuesday edition, so $4.00 a month for full access would be reasonable.

- iBooks seemed to work pretty well although I'd really like to be able to add my own epub books to it. Not sure if there's a way to do that in iTunes.

- upgraded to Plants vs. Zombies HD and have been having a hard time dragging the device away from my daughter (age 6). She also likes the drawing apps.

- mail working pretty well, but not that different from iPhone experience except for more screen space and easier typing

- battery life seems as advertised--it ran all day on a charge, including video, lots of PvZ playing, and weak wifi signals

It definitely meets the pick it up and carry it everywhere requirement, which raises several practical issues:

- will I ever be able to put it down? There's a serious danger of always having it to hand which means always reading something or playing a game or whatever.

- where will I set it down? We have concrete floors so you want to set it in a relatively safe place, of which we have few

- how do you keep it from being stolen?

So far I can say without reservations that it has exceeded my fairly high expectations after a day of use.

Cory Doctorow has made an eloquent and principled argument against the iPad as being a closed system that is counter to the basic concept of freedom and access the Internet represents. I agree with Cory in principle. I have spent my entire career championing standards specifically because they protect against proprietary control and lock in. Yet I have a MacBook and an iPhone and now an iPad and would not part with them. Why? Because they fricken work. They are solid and beautiful and reliable. Even though Cory is right it doesn't matter because there are very few of us who can trade reliability for openness.

If Google can build a software and hardware platform comparable to the iPad then I'm there. But so far not even Microsoft, much less the open source world, has succeeded in building a device (since WebTV) that I would put in my father's hands. Even my mother, who is quite computer savvy, has just traded in her Dell for a Mac.

At the same time, the content standards my clients depend on are all well supported: EPUB for ebooks, HTML for web delivery, PDF for page fidelity. Lack of Flash is an annoyance but not a deal killer since nobody should be depending on Flash exclusively anyway.

There is the question of whether the App Store as Apple manages it is draconian or a necessary evil in order to have a system safe for unsupervised use by children. I'm not sure I can form a useful opinion without a lot more thought. I'm not one for censoring children's access to information in general once they are old enough to understand what they might be finding, but 6 is not yet that age.


Saturday, January 09, 2010

Need a WebTV Replacement

Some years ago now I set my father up with WebTV. It met his needs perfectly: it gave him email and Web access from his TV (he spends most of his time in front of his TV), it was reliable, it didn't require him to learn how to use a computer generally, and it didn't require any support from me (my father is in Tacoma, Washington and I'm in Austin, Texas, so I can't just pop over to provide hands-on support).

My father is not tech savvy--he used a manual typewriter to produce a club newsletter for years until the club finally forced him to upgrade to an electric typewriter. He refuses to carry a mobile phone or use ATMs. You get the idea. However, he depends on email and eBay so he has to have some sort of Internet access.

Unfortunately, while Microsoft has not completely abandoned WebTV, they have not enhanced it in years and clearly have no intention of doing so--you can't even download the emulator they used to provide.

The problem for my father is that WebTV is simply no longer up to the task of supporting modern Web sites and it's becoming harder and harder for him to use eBay and other Web sites, like Amazon or Flickr. And forget about Facebook.

My quandary is what to replace WebTV with. So far I haven't been able to identify any obvious good solutions. The Wii's Web browser is close but it's still pretty clunky--even with a keyboard I don't think it would be reliable or simple enough for my dad--it requires a lot of Wiimote fiddling to scroll and pan around Web sites that don't fit nicely on a screen.

AppleTV would seem likely except that it doesn't come out of the box with a Web browser and I'm not going to support a hack that adds one.

A Mac mini might serve, but that gets us into the having a full computer problem, and I'm not sure my dad's TV takes HDMI input (I need to find out about that).

It seems like the new tablets that are all the buzz of the gadget world might serve, especially the rumored Apple tablet, but I'm not sure my dad would be willing to drop a grand on it, and I'm not keen to have him be an early adopter.

But I feel like I'm missing some obvious technology choice. Anyone out there have any thoughts about how to provide a TV-connected Web browser that is easy to use, works reliably, and will work with modern Web sites?

PDF2 Transform Now Enabled for Plugin-Based Extension

In the latest 1.5 Toolkit distributions, the PDF2 transform has been enabled for plugin-based extension. This means that you can use normal plugin techniques to provide extensions to the PDF processing that support specializations or global overrides, rather than customizations for specific publication sets or book designs.

As originally implemented, the PDF2 processor could only be extended through its unique Customization facility, whereby you either add things to its built-in Customization directory or create copies of that directory and then specify where the Customization directory is as a parameter to the transform. This is appropriate for customizations that are not global, that is, they are specific to particular publications, sets of publications, products, or whatever.
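For example, a build for a specific publication set might point the transform at its copied Customization directory like this (a minimal Ant sketch; customization.dir, args.input, transtype, and output.dir are the usual Toolkit parameter names of this era, and dita.ot.dir is assumed to point at your Toolkit installation--adjust for your setup):

<project name="run-pdf2" default="pdf" basedir=".">
  <target name="pdf">
    <!-- Run the Toolkit's own build file with the PDF2 transform type and
         a publication-specific Customization directory. -->
    <ant antfile="${dita.ot.dir}/build.xml">
      <property name="args.input" value="${basedir}/maps/user-guide.ditamap"/>
      <property name="transtype" value="pdf2"/>
      <property name="output.dir" value="${basedir}/out/pdf"/>
      <property name="customization.dir" value="${basedir}/Customization"/>
    </ant>
  </target>
</project>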

It is not appropriate, however, for providing general extensions, such as support for new domains where the domain-specific processing would normally be the same in all outputs or where the base processing is the same but can be customized using the normal PDF2 customization facilities.

In the latest DITA 1.5 Toolkit, you can now have both plugin-provided extensions as well as Customization-based extensions. This makes it easy to provide generic PDF2 support for specializations or provide global overrides for existing topic and map types.

A PDF2-extending plugin can provide only overrides, only a Customization directory, or both.

For example, for DITA for Publishers, I've started implementing support for the Publication Map (pubmap) map domain, which is similar to bookmap but tailored for Publishers. To support the PDF2 transform, I've created a plugin that provides both general extensions and a base Customization directory that can be used as a basis for local customizations.

The directory structure of the plugin is:

net.dita4publishers.pubmap.fo/
  Customization/
  xsl/
  plugin.xml

Where the Customization/ directory follows the rules and conventions for the PDF2 Customization directories and xsl/ holds the plugin-provided XSLTs that extend the base PDF2 processing.

The plugin.xml file looks like this:

<plugin id="net.sourceforge.dita4publishers.pubmap.fo">
  <require plugin="net.sourceforge.dita4publishers.formatting-d.fo"/>
  <require plugin="net.sourceforge.dita4publishers.pubContent-d.fo"/>
  <require plugin="net.sourceforge.dita4publishers.xml-d.fo"/>
  <feature extension="dita.xsl.xslfo"
           value="xsl/pubmap2xslfo.xsl"
           type="file"/>
</plugin>

The <require> elements indicate dependencies on other PDF2-extending plugins for the different domains that DITA for Publishers provides.

The <feature> element is what integrates the XSLT into the main PDF2 XSLT transforms, and it works just as it does for the HTML plugins: the integrator.xml Ant task adds an xsl:include of the plugin-provided XSLT module into the main PDF2 transform shell XSLT.

One thing that plugin-provided PDF2 transforms can do is define additional customization points: named attribute sets, named variables, and new XSLT modes, which can then be customized using the normal PDF2 customization mechanisms.
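For instance (a hypothetical module, not the actual DITA for Publishers code), a plugin-provided XSLT might declare an attribute set and a named variable that a Customization directory can then override without touching the plugin:

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:fo="http://www.w3.org/1999/XSL/Format"
                version="2.0">

  <!-- Default box styling for sidebar topics; override this attribute set
       in a Customization to change the rendering. -->
  <xsl:attribute-set name="sidebar-box">
    <xsl:attribute name="border">0.5pt solid black</xsl:attribute>
    <xsl:attribute name="padding">6pt</xsl:attribute>
  </xsl:attribute-set>

  <!-- A named variable acting as another customization point. -->
  <xsl:variable name="sidebarTitlePrefix" select="'Sidebar: '"/>

</xsl:stylesheet>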

In the case of the pubmap extensions, I've extended the XSLT so that publication maps produce the same output as bookmaps (that is, a pubmap-d/chapter topicref goes through the same base processing as a bookmap/chapter topicref) and added support for DITA for Publishers-specific topic types, in particular, sidebar, which gets a box around it by default (XSL-FO 1.1 can't render multi-page floats, which would be the ideal way to render sidebars).

This enhancement to the PDF2 processor, along with the many other improvements made by the Suite Solutions team, makes it much easier to extend and customize the processor and, in particular, support new domains and topic types. The Customization process is as it was, but now you only need to use XSLT in your customization when you need truly customization-specific processing (for example, generating a publication-specific title page or copyright page).

Saturday, May 16, 2009

Why DITA Requires Topic IDs (And Why Their Values Don't Matter)

The DITA standard requires all topics to have an id= attribute.

Why?

The reason is simple: so you can point to elements within topics. And for no other reason.

Surprised?

Most people seem to assume that topics are required to have IDs so you can point to the topics. And they further seem to assume that topic IDs need to be unique within some fairly wide scope (e.g., within their local topic repository).

But that's not the case at all.

For the case of topics that are the root elements of their containing XML documents and that contain no elements that themselves have IDs, the topic ID isn't needed at all. In this case the topic can be unambiguously addressed by the location of the containing XML document (e.g., "mytopic.xml").

In the case of topics that are not root elements and that are not themselves pointed to and that do not contain any elements with IDs, again the ID is not needed (because nothing points at the topic or its elements).

So why does the DITA standard require topics to have IDs?

It is because topics establish the addressing scope for their direct-descendant non-topic elements.

By the DITA spec, to point to an element that is not a topic you use a two-part pointer: {topicid}/{elementid}.
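For example (the file names and IDs here are made up), a cross-reference or a conref to an element inside a topic looks like this:

<!-- Points at the element with id="install-warning" inside the topic whose
     id is "configtopic" in the document configuring.dita. -->
<xref href="configuring.dita#configtopic/install-warning"/>
<p conref="warnings.dita#warningstopic/no-user-parts"/>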

Without a topic ID it would be impossible to point to a non-topic element.

Requiring all topics to have some ID ensures that any non-topic element with an ID is immediately addressable, without the need to also add an ID to its containing topic.

In general, by normal DITA practice, non-topic elements are given IDs only when they are intended to be used by conref or be the target of a cross-reference. Both of these tend to be carefully considered decisions driven by editorial and business rules, not arbitrary author decisions. That means you would tend to know, in advance of creating a given element, that it is a candidate for conref or xref use, and therefore know to give it an ID at the time you create it.

By requiring that topics always have IDs it means that authors don't have to worry about adding IDs to topics just because they also happened to put an ID on an element. [In normal XML practice, elements are addressed directly by ID within their containing document, which means it is sufficient to simply put an ID on the element with no other dependencies. That is not the case in DITA, which defines its own unique syntax for non-topic element addressing.]

Because topic IDs are XML IDs (as opposed to non-topic-element IDs, which are just name tokens and have no special XML-defined rules), any XML editor will both require topics to have IDs and ensure that topic IDs are unique within the scope of their containing document.

If topic IDs were not required, DITA-aware editors would have to have special rules to know to require topic IDs whenever non-topic elements got IDs and it would mean that generic XML editors would not ensure that this important DITA rule was met (topics with elements with IDs must themselves have IDs).

So the DITA spec requires that all topics have IDs.

But the fact that topics must have IDs does not imply that topic IDs need to be either descriptive or unique within any scope wider than the XML documents that contain them.

In the case where every topic is the root of its own document, the topic ID can be the same for every topic. To make this point, I make a standard practice of using the value "topicid" for the IDs of all my root topics. There is absolutely no need to generate unique topic IDs for document-root topics as a matter of standard practice.
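A minimal sketch of such a topic template (using the OASIS topic DTD here for brevity; in practice the DOCTYPE would reference your local shell DTD):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="topicid">
  <title>New topic</title>
  <body>
    <!-- Only elements intended for conref or xref get IDs. A reference to
         this paragraph would be mytopic.dita#topicid/reusable-note. -->
    <p id="reusable-note">Reusable content goes here.</p>
  </body>
</topic>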

The only other case is ditabase documents.

If you are using ditabase documents, stop.

Sorry.

There are some legitimate uses of ditabase documents, for example, as a first-pass target for data conversions and as a way to hold otherwise unrelated topics that need to be managed as a single unit of storage, such as topics that exist only to hold reusable elements.

[NOTE: Using ditabase simply to allow the mixing of different topic types in a single document during authoring is the wrong thing to do. You should have already created local shell DTDs and within those shells you can allow whatever topic type mixing is appropriate for your local environment. There is no need to use ditabase in that case and many reasons not to. See my many other posts about why you should always create local shell DTDs as the first step in setting up a production use of DITA.]

In that case, the topic IDs must be unique within the scope of the ditabase element, simply because XML rules demand it. But the IDs need not be unique beyond that scope and they need not be meaningful.

One of the implications of this is that if you always edit topics as individual documents and never have nested topics you never have to think about topic IDs. Your topic document template should already have an ID value and it can be something like "topicid" and there is no reason whatsoever for that ID to ever be changed.

In the case where you do edit topics with nested topics (for example, you're authoring more or less narrative documents or you've designed some topic types that need nested topics to allow a bit of hierarchy where the nested topics would never be meaningful in isolation) then you either have to configure your editor to assign IDs to the nested topics for you (if your document template doesn't already have the subtopics with IDs assigned) or you have to think about it. But even in that case, the IDs can be pretty generic, e.g. "st1", "st2", etc. The IDs in that case still don't need to be unique beyond the scope of the containing document.
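For example (illustrative only), a narrative-style document with nested subtopics might look like this; the subtopic IDs need only be unique within this one file:

<topic id="topicid">
  <title>Installation overview</title>
  <body><p>Introductory text.</p></body>
  <topic id="st1">
    <title>Before you begin</title>
    <body><p>Prerequisites.</p></body>
  </topic>
  <topic id="st2">
    <title>Next steps</title>
    <body><p>Where to go from here.</p></body>
  </topic>
</topic>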
