Dr. Macro's XML Rants

NOTE TO TOOL OWNERS: In this blog I will occasionally make statements about products that you will take exception to. My intent is always to be factual and accurate. If I have made a statement that you consider to be incorrect or inaccurate, please bring it to my attention and, once I have verified my error, I will post the appropriate correction.

And before you get too exercised, please read the post, dated 9 Feb 2006, titled "All Tools Suck".

Wednesday, May 18, 2016

Delivering HTML from DITA in The Face of Reuse of Topics in a Single Publication

In DITA you can use the same topic multiple times from the same map. For example, the same user interface component might be used by several different parts of a program, and you want to include the topic describing that component in the documentation for each of those parts. Or you might have a common installation topic that uses conditional content to apply to different operating systems.
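For example, a map might reference the same topic from two different branches (a minimal sketch; the file names and titles are made up):

<map>
  <title>Widget Editor User Guide</title>
  <topichead navtitle="Part Editor">
    <!-- Topic describing a shared user interface component -->
    <topicref href="common/color-picker.dita"/>
  </topichead>
  <topichead navtitle="Assembly Editor">
    <!-- The same topic used again in a different part of the publication -->
    <topicref href="common/color-picker.dita"/>
  </topichead>
</map>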
In monolithic output formats like PDF and EPUB this reuse does not present any particular practical problems: because the rendered publication is a single linear flow of content, each use simply occurs in the flow and reflects whatever conditions are in effect at that point in the publication.
However, with multi-file output formats like HTML, there are several practical problems.
The most obvious problem is the "one result file or multiple result files?" question: When a topic is used multiple times, do you want to have just one result HTML file or do you want one result HTML for each use? The DITA Open Toolkit, through version 1.8.5, only generates a single result HTML file unless the map author specifies @copy-to on the topicrefs to the topic. (The @copy-to attribute specifies the effective source path and filename for the referenced topic so that the processor then treats that use of the topic as though it was a copy of the real topic with the specified filename.)
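For example, the map author can force a separate result file for each use by giving each use its own effective source name (a sketch; the file names are hypothetical):

<!-- Each use gets its own effective source name and therefore its own result HTML file: -->
<topicref href="common/color-picker.dita" copy-to="part-editor-color-picker.dita"/>
...
<topicref href="common/color-picker.dita" copy-to="assembly-editor-color-picker.dita"/>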
The "every page is page one" philosophy says you should have just one result HTML file for a given topic. Likewise, searching is usually more effective if there is just one HTML file, otherwise you end up getting multiple search results for the same content, which confuses users and makes it hard to know which version to use (and may throw off search ranking algorithms that take the number of copies of a file into account in some way).
On the other hand, if a user comes to a topic that is used in multiple places in the publication, how do they know which use they care about in their current access session?
For the re-used installation topic example, if it reflects multiple operating systems and there is only one copy, you would appear to be required to show all the operating system versions and use flagging to distinguish them. On the other hand, if you have one HTML file for each copy of the topic, each HTML file only reflecting a single operating system, a search on installation will find all the copies, making it hard for the user to choose the right one.
DITA 1.3 adds an important new feature, key scopes, which allows keys to have different values in different parts of the same map. This lets you reuse the same topic in different contexts and have content references, hyperlinks, and key-defined text strings resolve to different values in the different use contexts.
For the installation example, you could have three key scopes, one for each of the operating systems Windows, OSX, and Linux.
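A map using key scopes for that might look something like this sketch (the scope names, key names, and file names are hypothetical; the Linux branch is omitted for brevity):

<map>
  <topichead navtitle="Installing on Windows" keyscope="windows">
    <keydef keys="os-name">
      <topicmeta><keywords><keyword>Windows</keyword></keywords></topicmeta>
    </keydef>
    <topicref href="installing.dita"/>
  </topichead>
  <topichead navtitle="Installing on OSX" keyscope="osx">
    <keydef keys="os-name">
      <topicmeta><keywords><keyword>OSX</keyword></keywords></topicmeta>
    </keydef>
    <topicref href="installing.dita"/>
  </topichead>
</map>

Within installing.dita, a reference like <keyword keyref="os-name"/> then resolves to "Windows" in the first branch and to "OSX" in the second.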
DITA 1.3 also adds the new branch filtering feature. With branch filtering you can apply different filtering conditions to branches within a single map. This lets you use the same topic in different parts of the map with different filtering conditions applied.
For the installation topic you can now have a single topic as authored with content that is conditional to each operating system and then have only the operating system for the branch filtered in.
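With branch filtering the map might look something like this sketch (the DITAVAL file names are hypothetical):

<map>
  <topichead navtitle="Installing on Windows">
    <topicref href="installing.dita">
      <!-- Only content conditional to Windows survives in this branch -->
      <ditavalref href="windows.ditaval"/>
    </topicref>
  </topichead>
  <topichead navtitle="Installing on Linux">
    <topicref href="installing.dita">
      <ditavalref href="linux.ditaval"/>
    </topicref>
  </topichead>
</map>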
It should be obvious that this must result in either three result HTML files, one reflecting each different set of filtering conditions, or a single HTML file constructed so that the browser can do the filtering dynamically, such as through different CSS files for the different filtering conditions or through Javascript or some combination.
This all means that, with DITA 1.3, DITA-to-HTML processors must handle multiple uses of the same topic in a sophisticated way. The OT 1.x approach of generating a single HTML result will not work. By contrast, the OT 2.x approach of always generating a new result file works, in that it ensures a correct result, but does not necessarily satisfy requirements for minimizing content duplication in the result.
So basically there is a fundamental conflict between ensuring correct content in the generated HTML when branch filtering and key scopes are in effect and satisfying the "every page is page one" philosophy.
If every use of a topic results in a new HTML file then searching is impaired but HTML generation is as simple as it can be. 
In the context of the Open Toolkit, branch filtering (and @copy-to) is applied to create new intermediate topic files and then those intermediate topics are filtered to produce another set of intermediate topics which are then the input to the normal HTML generation process. All the data processing complexity is in the preprocessing.
In order to produce a single result HTML file, the processor has to determine, for the conditional content in a given topic, which content would be filtered out of every use and which content would be filtered in for at least one use context, and then produce an intermediate topic that omits the globally-excluded elements but retains the elements included in any use. It also has to record each use and how it relates to the included conditional elements, so that the final HTML generation stage can carry that information into the generated HTML where CSS or Javascript can act on it. For example, the processor might translate each unique set of filtering conditions into a single value included in the conditional element's @class values, or it might embed a JSON data structure that establishes the map context the element was referenced in.
Given this kind of information in the generated HTML it would then be possible to have the browser dynamically show or hide specific elements based on the active conditions selected by the reader. By default the content could be flagged as it would be in the normal flagged output result produced by the normal Open Toolkit flagging preprocessing.
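For example, the generated HTML for the conditional paragraphs might look something like this (purely a sketch of one possible encoding; the class names are invented, not anything the Open Toolkit produces today):

<p class="p condition-os-windows">Run the downloaded installer...</p>
<p class="p condition-os-linux">Unpack the archive and run the install script...</p>

A small stylesheet rule or script could then hide every element whose condition token does not match the reader's current selection, and simply flag all of them when no selection has been made.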
However, with this dynamically-filtered HTML file there's still the problem of the reader knowing what use context they want to view the topic in terms of.
For example, if you do a search and find the installation HTML page and open it you then have to decide which operating system you want to view it in terms of. 
How is this decision presented to the reader? 
How does the Web site track access in order to establish this use context automatically when it can?
And of course the situation could be much more complicated: there could be a number of conditions against which the content is filtered, e.g., operating system, hardware platform, region, product features active, etc.
I think this is a delivery challenge that the DITA community needs to address generally: by establishing best practices around content authoring and delivery, by implementing the DITA-to-HTML processing that supports generating these more-sophisticated HTML pages, and by implementing general CSS and Javascript libraries for use in DITA-based Web sites.


Tuesday, February 02, 2016

Rethinking DITA "Custom" Attributes

I had a bit of an epiphany today as regards the definition of "custom" attributes and their DITA conformance or lack thereof.

To date I think I have been consistent in saying "You can't define attributes for individual elements in a way that is a conforming DITA specialization."

But in discussing this today with some colleagues I realized I have been too limited in my thinking. In particular, I now think that @base is an appropriate specialization base for any "custom" attribute and can be combined with constraint modules to limit the appearance of those attributes to specific element types.

If the initial requirement is, for example, "I want the attribute @foo on table" then I think you can make this a conforming @base specialization as follows:

1. Declare a normal attribute domain module for your new attribute, specializing it from @base:

me_fooAttDomain.ent:

<!-- @foo attribute domain module: -->
<!ENTITY % me_fooAtt-d-attribute "foo CDATA #IMPLIED">

<!ENTITY me_fooAtt-d-att "a(base foo)" >
<!-- End of domain module -->

2. Define a constraint module that allows @foo on the elements you want it to be allowed on:

me_fooAttributeConstraint.mod:

<!-- @foo attribute constraint module: -->

<!ENTITY me_fooAttTableOnly-constraints
"(topic me_fooAttTableOnly-c)"
>

<!ATTLIST table foo CDATA #IMPLIED >
<!-- End of constraint module. -->

3. In your shell, include the domain module but *do not* add it to the base attribute extensions (not including it there is itself a constraint and needs to be declared as such on the @domains attribute, which our separate constraint module will do for us):

<!ENTITY % me_fooAtt-d-dec
      PUBLIC "urn:pubid:example.com:dita:attributes:me_fooAttDomain.ent"
      "me_fooAttDomain.ent"
>%me_fooAtt-d-dec;
...
<!-- Constraint: Not including @foo in base attribute extensions: -->
<!ENTITY % base-attribute-extensions
""
>
4. In your shell, include the constraint module:

<!-- ============================================================= -->
<!-- DOMAIN CONSTRAINT INTEGRATION -->
<!-- ============================================================= -->

<!ENTITY % me_fooAttributeConstraint.def
    PUBLIC "urn:pubid:example.com:dita:constraints:me_fooAttributeConstraint.mod"
           "me_fooAttributeConstraint.mod"
>%me_fooAttributeConstraint.def;

5. In your shell, add the constraint domains contribution to the @domains attribute:

<!ENTITY included-domains
    "&concept-att;
     ...
     &me_fooAttTableOnly-constraints;
    "
>

You've now declared a specialization of @base but then constrained it to only be allowed on <table>, as defined in the separate constraint module.

The 1.3 specification says this about the @base attribute:

"A generic attribute that has no specific purpose. It is intended to act as a base for specialized attributes that have a simple value syntax like the conditional processing attributes (one or more alphanumeric values separated by whitespace), but is not itself a filtering or flagging attribute. The attribute takes a space-delimited set of values. However, when acting as a container for generalized attributes, the content model will be more complex; see Attribute generalization <http://docs.oasis-open.org/dita/v1.2/os/spec/archSpec/attributegeneralize. html> for more details."
-- http://docs.oasis-open.org/dita/v1.2/os/spec/common/select-atts.html#select-atts

As long as your attribute's values satisfy the requirement that "the attribute takes a space-delimited set of values," your attributes will be conforming instances of @base.

An interesting implication of using @base is that normal generalization processing will convert @foo="bar" to base="foo(bar)", which means you can also author @base attributes using that syntax (just as you can author @props using the same syntax, e.g. props="mycondition(myvalue)", which is equivalent to having a @props specialization named "@mycondition" with the value "myvalue"). In particular, this provides a standard way to interchange documents in terms of the OASIS-defined vocabulary without loss of information by generalizing specialized attributes to their @base and @props bases (which is part of the point of attribute specialization in the first place).
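For example (a minimal sketch using the hypothetical @foo attribute from above):

<!-- As authored, using the specialized attribute: -->
<table foo="bar"> ... </table>

<!-- The same element after attribute generalization: -->
<table base="foo(bar)"> ... </table>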

The result of all this is that you have an attribute that is a conforming specialization of @base and that is limited, via constraint, to only those places where you want to allow it. Your intent as a grammar designer is clear: I want this attribute to be limited to these element types.

If the constraint module isn't used (but the attribute domain is in the normal way), all your existing documents that have @foo on <table> will continue to be valid, which is one test for the correctness of a constraint. If they are generalized so @foo becomes base="foo(bar)" they will also continue to be valid.

So I reverse my earlier position on the appropriateness and conformance of element-type-specific attributes: the @base attribute combined with the constraint mechanism allows it without having to play any semantic tricks. Interchange is preserved through generalization, everyone is happy, and peace and prosperity reign across the land.


Tuesday, January 05, 2016

Some DITA and DocBook History: Common Origins, Very Different Results

The following was originally posted to the DITA Users' Yahoo Group 4 Jan 2016 in the context of a discussion of DITA vs. DocBook. My intent with this bit of history is to show how both DocBook and DITA (through its ancestor, IBM ID Doc) started development around the same time, more or less from a single meeting. Those of us at IBM took things in one direction, those in the Unix-focused community went in a different direction.

The original post (edited for typos):

If you look at the history of DocBook and DITA both descend from the same time period, the late 80’s, when the technical communication industry in particular (but not exclusively) was trying to figure out how to apply this new SGML technology to their particular information management and document production challenges.

In the case of DocBook the genesis was primarily standardizing Unix man pages. In the case of DITA it was IBM’s attempt to standardize the markup used across the many different divisions and product groups within IBM as well as satisfy the requirements of online delivery of hyperlinked documents, something IBM was doing in the 80’s, long before anyone else outside of hypertext research groups, as far as I know.

There was a meeting in the late 80’s, I think 1989, where representatives from the major software and hardware vendors met to discuss ways of standardizing the markup across their documentation, including IBM, HP, Digital Equipment, Group Bull, and one or more Unix vendors (the names escape me now—all except IBM and HP are long gone) in order to have some hope of interchange among them.

The meeting was hosted by Fred Dalrymple of the Open Software Foundation at offices in the Boston area. The work was led by Eve Maler, who was pioneering approaches to DTD design and modularization (she popularized the “pizza” model, adopted by the TEI and also reflected somewhat in DocBook and DITA). I was there with Wayne Wohler representing IBM. (Eve wrote the first book on SGML DTD design: "Developing SGML DTDs: From Text to Model to Markup”, with Jeanne El Andaloussi, who was at Group Bull at the time.)

One of the key things that Eve did was make a table that related the markup vocabularies of each participant to each other vocabulary. There was a row for “paragraph”, a row for “H1”, etc. [I’m sure I don’t have a copy of this table anywhere but it would be interesting to see it now—I have a clear picture of it in my mind but not clear enough to reproduce. But this table was, in many ways, the direct inspiration for my approach to markup design and set the direction of my technical career from then to now.]

What this table made clear was that all these languages had the same basic set of semantic elements but they all used different tag names and had different detailed rules for the content. But they all had some kind of paragraph element, headings, tables, lists, etc. (Remember that this was before HTML had been defined by Sir Tim Berners-Lee—he based HTML off of the basic tag set in IBM’s GML Starter Set language, which predated SGML and was in use at CERN at the time Berners-Lee developed HTML.)

What Wayne and I got from this meeting was that A) there was this semantic correspondence and B) we needed a way to allow differences in markup details (tag names, content models) that still allowed interoperation. I realized that one could define a layered architecture with these base types as its foundation and, given a way to map specific element types to their bases, allow variety in the markup naming and content details while allowing interchange and common processing.

Soon after this Wayne and I, along with Don Day, Simcha Gralla, and others, started working on IBM’s SGML replacement for the GML-based BookMaster language, which was used for most of IBM’s documentation and had more than 600 element types, reflecting a very broad range of requirements. BookMaster allowed for very efficient creation of documentation delivered in print and online on 5 different computer platforms using IBM’s BookManager tool, which provided electronic books starting in the mid 80’s. But BookMaster was also big and difficult to change or extend. It suffered the same problems that all large all-encompassing vocabularies suffer: it became a tarball that was difficult to adapt to new requirements. IBM had a committee that considered BookMaster change requests and it worked on a 6-month cycle at best. BookMaster was also based on proprietary IBM composition technology, the Document Composition Facility, which was becoming obsolete with the development of PCs and more modern processing languages and systems.

At this same time Dr. Charles Goldfarb, inventor of GML and SGML, was now working on HyTime, an SGML-based language for hypertext representation. Dr. Goldfarb knew that he couldn’t impose a specific tag set but had to have a way to allow any element type to indicate what kind of HyTime thing it was. His solution was “architectural forms”, a mechanism that relied on specific SGML features to allow elements to declare how they related to the HyTime-defined element types and attributes. It also imposed basically the same content model constraints that DITA specialization imposes, namely that the content models of the derived element types had to be consistent with those of their architectural bases, but HyTime was necessarily less restrictive.

For the SGML BookMaster replacement, which we called IBM ID Document Type (IDDoc), we needed robust linking and we needed something like architectural forms. So we adopted HyTime both for linking and for the architectural forms mechanism. [As a side effect I became involved with Dr. Goldfarb and Dr. Newcomb with the development of the HyTime standard itself. You can ask my wife about “No, Charles.” sometime…]

For IBM ID Doc we defined a base set of elements that reflected the 25-or-so basic semantic elements that Eve had identified at that meeting at the OSF. The rest of the vocabulary was then built up from those base types. This layered architecture allowed the implementation of common processing while allowing local creation of new vocabulary to meet new requirements. Interchange and interoperation were preserved but the overall system became more flexible. This design was completed in about 1993 and implementation and use proceeded and continues to this day, although I understand that use of IDDoc is almost completely replaced by use of DITA within IBM. I left IBM in 1994. Don Day stayed.

Thus DITA reflects one ancestral branch from those early days of SGML application design.

Soon after or at the same time as the OSF meeting, another group of people founded the Davenport group, focused on standardizing Unix MAN pages. I was not directly involved in these meetings so I can’t comment on the details but their work became the basis for DocBook. I did attend one DocBook meeting sometime in the early 90’s (I remember I was still wearing suits per the IBM dress code, so it had to be before ’92 or ’93) and presented my attempt to use architectural forms to formally map DocBook to IDDoc and to try to plant the idea of architectural forms and layered architectures but I was not successful. I think I was seen mostly as a disruptive crank, which I probably was to some degree.

[From Fred Dalrymple’s LinkedIn page, on his time at OSF: "Designed the book style and created formatting tools for all OSF technical publications, published by Prentice-Hall. Led migration of OSF technical publications from legacy format (UNIX nroff/troff) to SGML, including definition of the OSF DTD and development of transformation tools. This work led directly to the creation of DocBook and the Topic Maps standard, ISO/IEC 13250:2000.”]

Don and Michael Priestley can give the history of the development of DITA within IBM after I left at the end of ’93 but the result is apparent today: the DITA we know and love.

In the ensuing decade between ’93 and 2003 I became an editor of HyTime 2nd Edition and a founding member of the XML Working Group. I did a lot of client work developing custom SGML and XML vocabularies and tried to apply the same layered architectural model that we had defined at IBM. XML omitted the SGML features required for HyTime’s architectural forms mechanism (which is why DITA has the @class attribute it does) and the publication of the XML standard in 1997 made HyTime instantly obsolete (we published HyTime 2nd Edition in 1996, just in time for it to be completely ignored by most people, although its influence is still felt in newer applications, including DITA, XLink, TEI, JATS, and DocBook).

When Don approached me in 2000 or 2001 about this DITA standard thing he was starting, I was very eager to participate because I saw it as a potential way to fully realize many of the ideas I’d been working with over the previous decade or so.

[This is the end of the original posting. Obviously there is lots more history here but I think this provides some insight into how DITA and DocBook came to be. Would definitely like to hear the DocBook side of this story as I'm sure I've either omitted important events or misrepresented important aspects.]


Saturday, November 14, 2015

Trip Report: Tekom 2015, DITA vs Walled Garden CCMS Systems

[This was originally posted to the DITA Users Yahoo group. I'm posting it here for ease of future reference.]
This week I attended the Tekom 2015 conference in Stuttgart, Germany. This
is one of, if not the, largest technical documentation conferences in
Europe. Several of us from the DITA community were invited to speak,
including Kris Eberlein, Keith Schengili-Roberts, Jang Graat, Scott
Prentice, and Sarah O'Keefe. This is the second year that Tekom has had
dedicated DITA presentations, reflecting the trend of increasing use of
and interest in DITA in Europe.

DITA vs. Not-DITA

The theme for me this year was "DITA vs German CCMS systems".

For background, Germany has had several SGML- and XML-based component
content management system products available since the mid 90's, of which
Schema is probably the best known. These systems use their own XML models
and are highly optimized for the needs of the German machinery industry.
They are basically walled garden CCMS systems. These are solid tools that
provide a rich set of features. But they are not necessarily generalized
XML content management systems capable of working easily with any XML
vocabulary. These products are widely deployed in Germany and other
European countries.

DITA poses a problem for these products to the degree that they are not
able to directly support DITA markup internally, for whatever reason,
e.g., having been architected around a specific XML model such that
supporting other models is difficult.

So there is a clear and understandable tension between the vendors and
happy users of these products and the adoption of DITA. Evidence of this
tension is the creation of the DERCOM association
(http://www.dercom.de/en/dercom-home), which is, at least in part, a
banding together of the German CCMS vendors against DITA in general, as
evidenced by the document "Content Management and Structured Authoring in
Technical Communication - A progress report", which says a number of
incorrect or misleading things about DITA as a technology.

The first DITA presentation of the conference was "5 Reasons Not to Use
DITA from a CCMS Perspective" by Marcus Kesseler, one of the founders of
Schema.

It was an entertaining presentation with some heated discussion but the
presentation itself was a pretty transparent attempt to spread fear,
uncertainty, and doubt (FUD) about DITA by using false dichotomies and
category errors to make DITA look particularly bad. This was unfortunate
because Herr Kesseler had a valid point, which came out in the discussion
at the end of his talk, which is that consultants were insisting that if
his product (Schema, and by extension the other CCMS systems like Schema)
could not do DITA to a fairly deep degree internally then they were
unacceptable, regardless of any other useful functionality they might
provide.

This is definitely a problem in that taking this sort of purist attitude
to authoring support tools is simply not appropriate or productive. While
we might argue architectural choices or implementation design options as a
technical discussion (and goodness knows I have over the years), it is not
appropriate to reject a tool simply because it is not DITA under the
covers. In particular, if a system can take DITA in and produce DITA back
out with appropriate fidelity, it doesn't really matter what it does under
the covers. Now whether tools like Schema can, today, import and export
the DITA you require is a separate question, something that one should
evaluate as part of qualifying a system as suited to task. But certainly
there's no technical barrier to these tools doing good DITA import and
export if it is in fact true, as claimed, that what they do internally is
*functionally* equivalent to DITA, which it may very well be.

In further discussions with Marcus and others I made the point that DITA
is first and foremost about interchange and interoperation and in that
role it has clear value to everyone as a standard and common vehicle for
interchange. To the degree that DERCOM, for example, is trying to define a
standard for interoperation and interchange among CCMS systems, DITA can
offer some value there.

I also had some discussions with writers faced with DITA--some
enthusiastic about it, some not--who were frustrated by the difficulty of
doing what they needed using the usual DITA tools as compared to the
highly-polished and mature features provided by systems like Schema. This
frustration is completely understandable--we've all experienced it. But it
is clearly a real challenge that German and, more generally, European
writing teams face as they adopt or consider adopting DITA and it's
something we need to take seriously.

One aspect of this environment is that people do not separate DITA The
Standard from the tools that support the use of DITA precisely because
they've had these all-singing, all-dancing CCMS systems where the XML
details are really secondary.

A DITA-based world puts the focus on the XML details, with tools being a
secondary concern. This leads to a mismatch of expectations that naturally
leads to frustration and misunderstanding. When people say things like
"The Open Toolkit doesn't do everything my non-DITA CCMS does" you know
there is an education problem.

This aspect of the European market for DITA needs attention from the DITA
community and from DITA tool vendors. I urged the writers I talked to to
talk to the DITA CCMS vendors to help them understand their specific
requirements, the things tools like Schema provide that they really value
(for example, one writer talked about support for creating sophisticated
links from graphics, an important aspect of machinery documentation but
not a DITA-specific requirement per-se). I also urged Marcus to look to
us, the DITA community, for support when DITA consultants make
unreasonable demands on their products and emphasized the use of DITA for
interchange. I want us all to get along--there's no reason for there to be
a conflict between DITA and non-DITA and maintaining that dichotomy is not
going to help anyone in the long term.

Other Talks

On Wednesday there was a two-hour "Intelligent Information" panel
consisting of me, Kris Eberlein, Marcus Kesseler from Schema, and Torsten
Kuprat of Acolada, another CCMS vendor. Up until the end this was a
friendly discussion of intelligent information/intelligent content and
what it means, what it is and isn't, etc. At the end of the session we did
touch on the DITA vs. non-DITA arguments but avoided getting too
argumentative. But Kris and I both tried to push on the
standard-for-interchange aspect of intelligent content and information.

This panel was opposite a couple of other DITA presentations so I was
unable to attend those.

Keith Schengili-Roberts presented on the trends of DITA adoption around the
world, which was pretty interesting. While his data sources are crude at
best (LinkedIn profiles and job postings as well as self-reported DITA
usage) he clearly demonstrated a positive trend in DITA adoption around
the world and in Europe. I thought it was a nice counter to the
presentations of the day before.

Frank Ralf and Constant Gordon presented NXP's use of DITA and how they've
applied it to the general challenges of semiconductor documentation
management and production. It was a nice high-level discussion of what a
DITA-based system looks like and how such a system can evolve over time,
as well as some of the practical challenges faced.

My talk was on why cross-book links in legacy formats like DocBook and
Framemaker break when you migrate those documents to DITA: "They Worked
Before, What Happened? Understanding DITA Cross-Book Links"
(http://www.slideshare.net/drmacro/they-worked-before-what-happened-understanding-dita-crossbook-links).
(Short version: you have to use the new cross-deliverable linking features
in DITA 1.3.)

George Bina presented on using custom Java URL handlers with custom URL
schemes to seamlessly convert non-XML formats into XML (DITA or otherwise)
in the context of editors like oXygenXML and processors like the DITA Open
Toolkit. He demonstrated treating things such as spreadsheets, Java class
files, and markdown documents as XML using URL references from otherwise
normal XML markup. Because the conversion is done by the URL handlers,
which are configured at the Java system level, the tools handling the XML
documents don't need to have any knowledge of the conversion tools. The
sample materials and instructions for using the custom "convert:" URL
scheme George has defined are available at
https://github.com/oxygenxml/dita-glass.

Wednesday's DITA events ended with a panel discussion on challenges faced
when moving to DITA, moderated by Sarah O'Keefe from Scriptorium and
featuring George Bina (Syncro Soft), Magda Caloian (Pantopix), and
Nolwenn Kezreho (IXIASOFT). It was an interesting discussion and
touched on some of the same tools and expectation mismatches discussed
earlier.

On Thursday, Jang Graat gave a tutorial titled "The DITA Diet": using DITA
configuration and constraints to tailor your use of DITA to eliminate the
elements you don't need. He also demonstrated a FrameMaker utility he's
developed that makes it easy to tailor DITA EDDs to reflect the
configuration and constraints you want.

Also on Thursday was the German-language version of the intelligent
content panel, with Sarah O'Keefe from Scriptorium representing the consultant
role. I was not present so can't report on what was said.

Tool and Service Vendors

One interesting new tool I saw (in fact the only new product I saw) was
the Miramo formatter Open Toolkit plugin, which is currently free for
personal use. It is a (currently) Windows-only formatter that competes
with products like Antenna House XSL Formatter and RenderX XEP. It is not
an FO implementation but offers comparable batch composition features. It
comes with a visual design tool that makes it easy to set up and modify
the composition details. This could be a nice alternative to hacking the
PDF2 transform. The server version price is comparable to the Antenna
House and XEP products. The tool is available at http://www.miramo.com. I
haven't had a chance yet to evaluate it but I plan to. I emphasized the
value of having it run on other platforms and the Miramo representative
thought it would be possible for them to support other platforms without
too much effort.

Adobe had their usual big booth, highlighting Framemaker 2015 with its
new DITA 1.3 features. Syncro Soft had a bigger and more prominent booth
for oXygenXML. FontoXML had their booth and I think there was another
Web-based XML/DITA editor present but I didn't have a chance to talk to
them.

Of the usual DITA CCMS vendors, IXIASOFT was the only one at the
conference (at least that I noticed). SDL had a big booth but they
appeared to be focusing on their localization and translation products,
not on their CMS system.

I think the mix of vendors reflects a settling out of the DITA technology
offerings as the DITA products mature. The same thing happened in the
early days of XML. It will be interesting to see who is also at DITA
Europe next week.

Summary

All-in-all I thought Tekom was a good conference for me--I learned a lot
about the state of DITA adoption and support in Europe generally and
Germany specifically. I feel like I have a deeper understanding of the
challenges that both writers and tool vendors face as DITA gets wider
acceptance. Hopefully we can help resolve some of the DITA vs. not-DITA
tension evident at the conference. I got to talk to a lot of different
people as well as catch up with friends I only see at conferences (Kris
Eberlein and Sarah O'Keefe were joking about how, while they both live in
the same city, they only see each other at this conference).

It's clear to me that DITA is gaining traction in Europe and, slowly, in
Germany but that the DITA CCMS vendors will need to step up their game if
they want to compete head-to-head against entrenched systems like Schema
and Acolada. Likewise, the DITA community needs to do a better job of
educating both tools vendors and potential DITA users if we expect them to
be both accepting of DITA and successful in their attempts to implement
and use it.

I'm looking forward to next year. Hopefully the discussion around DITA
will be a little less contentious than this year.


Sunday, January 26, 2014

DITA without a CMS: Tools for Small Teams

[This is a copy of a post I made to the Yahoo DITA Users list.]

A topic of discussion that comes up quite a bit (it came up at the recent Central Texas DITA User Group meeting) is how to "do DITA" without a CMS, by which we usually mean, how to implement an authoring and production workflow for a small team of authors with limited budget without going mad?

NOTE: I'm using the term "CMS" to mean what are often called Component Content Management (CCM) systems.

This is something I've been thinking about and doing for going on 30 years now, first at IBM and then as a consultant. At IBM we had nothing more than mainframes and line-oriented text editors along with batch composition systems yet we were able to author and manage libraries of books with sophisticated hyperlinks within and across books and across libraries. How did we do it? Mostly through some relatively simple conventions for creating IDs and references to them and a bit of discipline on the part of writing teams. We were using pre-SGML structured markup back then but the principles still apply today.

As I say in my book, DITA for Practitioners, some of my most successful client projects have not had a CMS component.

Note that I'm saying this as somebody who has worked for and still works closely with a major CMS vendor (RSI Content Solutions). In addition, as a DITA consultant who wants to work with everybody and anybody, I take some risk saying things like this since a large part of the DITA tools market revolves around CMS systems (as opposed to editors or composition systems, where the market has essentially settled on a small set of mature providers that are unlikely to change anytime soon).

So let me make it clear that I'm not suggesting that you never need a CMS--in an enterprise context you almost certainly do, and even in smaller teams or companies, lighter-weight systems like EasyDITA, DITAToo, Componize, BlueStream XDocs, and DocZone can offer significant value within the limits of tight small-team budgets.

But for many teams a CMS will always be prohibitive, whether in cost or time or both, especially at the start of projects. So there is a significant part of the DITA user community for whom CMS systems are either an unaffordable luxury or something to be worked toward and justified once an initial DITA project proves itself.

In addition, an important aspect of DITA is that you can get started with DITA and be productive very quickly without having to first put a CMS in place. Even if you know you need to have a CMS, you can start small and work up to it. I have seen many documentation projects fail because too much emphasis was put on implementing the CMS first.

Many people get the idea, for whatever reason, that a CMS is cost of entry for implementing DITA and that is simply not the case for many DITA users.

The net of my current thinking is that this tool set:
  • git for source content management
  • DITA Open Toolkit for output processing
  • Jenkins for centralized and distributed process automation
  • oXygenXML for editing and local production
allows you to implement an almost-complete, low-cost DITA authoring, management, and production system. Of these four tools, only one, oXygenXML, is commercial. If you use Github to host private repositories that has a cost but it's minimal.

In particular, the combination of git, Jenkins, and the Open Toolkit enables easy implementation of centralized, automatic build processing of DITA content. Platform-as-a-service (PaaS) providers like CloudBees, OpenShift, and Amazon Web Services provide free and low-cost options for quickly setting up central servers for things like Jenkins, web sites, and so on, with varying degrees of privacy and easy-to-scale options.

The key here is low dollar cost and low labor investment to get something up and running quickly.

This doesn't include the effort needed to customize and extend the OT to meet your specific output needs--that's a separate cost dependent entirely on your specific requirements. But the community continues to improve its support for doing OT customization and the tools are continually improving, so that should get easier as time goes on (for example, Leigh White's DITA for Print book from XML Press makes doing PDF customization much easier than it was before--it's personally saved me many hours in my recent PDF customization projects).

For each of these tools there are of course suitable alternatives. I've specified these specific tools because of their ubiquity, specific features, and ease of use. But the same approach and principles could be applied to other combinations of comparable tools.

OK, so on to the question of when must you have a CMS and when can you get by without one?

A key question is what services do CMS systems provide and how critical are they and what alternatives are available?

As in all such endeavors it's a question of understanding your requirements and matching those requirements to appropriate solutions. For small tech doc teams the immediate requirements tend to be:
  1. Centralized management of content with version control and appropriate access control
  2. Production of appropriate deliverables
  3. Increased reuse to reduce content redundancy
  4. Localization
Given that understanding of very basic small-team requirements, how do the available tools align to those requirements?

Since the question is CMS vs. ad-hoc systems built from the components described above, the main question is "What do CMS systems do and how do the ad-hoc tools compare?"

CMS systems should or do provide the following services:

1. Centralized storage of content. For many groups just getting all their content into a single storage repository is a big step forward, moving things off of people's personal machines or out of departmental storage silos.

2. Version management. The management of content objects as versions in time.

3. Access control. Providing controls over who can do what with different objects under what conditions.

4. Metadata management for content objects. The ability to add custom metadata to objects in order to enable finding or support specific business processes. This includes things like classification metadata, ownership or rights metadata, and metadata specific to internal business processes or data processing.

5. Search and retrieval of content objects. The ability to search for and reliably find content objects based on their content, metadata, workflow status, etc.

6. Management of media assets. The ability to manage non-XML assets (images, videos, etc.) used by the main content objects. This typically includes support for media object metadata, format conversion, support for variants, streaming, and so on. Usually includes features to manage the large physical data storage required. Sometimes provided by dedicated Digital Asset Management (DAM) systems.

7. Link management. Includes maintaining "where used" information about content and media assets, management of addressing details, and so on.

8. Deliverable production. Managing the generation of deliverables from content stored in the CMS, e.g. running the Open Toolkit or equivalent processes.

These are all valuable features and as the volume of your content increases, as the scope of collaboration increases, and as the complexity of your re-use and linking increases, you will certainly need systems that provide these features. Implementing these services completely and well is a hard task and commercial systems are well worth the cost once you justify the need. You do not want to try to build your own system once you get to that point.

In any discussion like this you have to balance the cost of doing it yourself with the cost of buying a system. While it's easy to get started with free or low-cost tools, you can find yourself getting to a place where the time and labor cost of implementing and maintaining a do-it-yourself system is greater than the cost of licensing and integrating a commercial system. Scope creep is a looming danger in any effort of this scope. Applying agile methods and attitudes is highly recommended.

The nice thing about DITA is that, if you don't do anything too tool-specific, you should be able to transition from a DIY system to a commercial one with minimum abuse to your content. That's part of the point of XML in general and DITA in particular.

Also, keep in mind that DITA has been explicitly architected from day 1 to not require any sort of CMS system--everything you need to do with DITA can be done with files on the file system and normal tools. There is no "magic" in the DITA design (although it may feel like it to tool implementors sometimes).

So how far can you get without a dedicated CMS system?

I suggest you can get quite a long ways.

Services 1, 2, and 3: Basic data management

The first three services (centralized storage, version management, and access control) are provided by all modern source code management (SCM) tools, e.g. Subversion, git, etc. With the advent of free and low-cost services like Github, there is essentially zero barrier to using modern SCM systems for managing all your content. Git and Github in particular make it about as easy as it could be. You can use Github for free for public repositories and at a pretty low cost for private repositories or you can easily implement internal centralized git repositories within an enterprise if you have a server machine available. There are lots of good user interfaces for git, including Github's own client as well as other open-source tools like SourceTree.

Git in particular has several advantages:
  • It is optimized to minimize redundant storage and network bandwidth. That makes it suitable for managing binaries as well as XML content. Essentially you can just put everything in git and not worry about it.
  • It uses a distributed repository model, in which each user can have a full copy of the central repository to which they can commit changes locally before pushing them to the central repository. This means you can work offline and still do incremental commits of content. Because git is efficient with storage and bandwidth, it's practical to have everything you need locally, minimizing dependency on live connections to a central server.
  • Its branching model makes working with complex revision workflows about as easy as it can be (which is not very but it's an inherently challenging situation).

Service 4: Metadata Management

Here DITA provides its own solution in that DITA comes out of the box with a robust and fully extensible metadata model, namely the <data> element. You can put any metadata you need in your maps and topics, either by using <data> with the @name attribute or by creating simple specializations that add new metadata element types tailored to your needs. For media assets you can either create key definitions with metadata that point to media objects or use something like the DITA for Publishers <art> and <art-ph> elements to bind <data> elements to references to media objects (unfortunately, the <image> element does not allow <data> as a direct child through DITA 1.3).
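For example, a topic prolog might carry team-defined metadata like this (a sketch; the metadata names and values are made up):

<topic id="installing-widget">
  <title>Installing the widget</title>
  <prolog>
    <metadata>
      <!-- Arbitrary, team-defined metadata carried directly in the topic: -->
      <data name="owner" value="jsmith"/>
      <data name="review-status" value="approved"/>
    </metadata>
  </prolog>
  <body>
    <p>...</p>
  </body>
</topic>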

In addition, you can use subject schemes and classification maps to impose metadata onto anything if you need to.

It is in the area of metadata management that CMS systems really start to demonstrate their value. If you need sophisticated metadata management then a CMS is probably indicated. But for many small teams, metadata management is not a critical requirement, at least not at first.

Service 5: Search and Retrieval

This is another area where CMS systems provide obvious value. If you have your content in an SCM it probably doesn't provide any particular search features.

But you can use existing search facilities, including those built into modern operating systems and those provided by your authoring tools (e.g., Oxygen's search across files and search within maps features). Even a simple grep across files can get you a long way.

If you have more implementation resources you can also look at using open-source full-text systems or XML databases like eXist and MarkLogic to do searching. It takes a bit more effort to set up but it might still be cheaper than a dedicated CMS, at least in the short term.

If your body of content is large or you spend a lot of time trying to find things or simply determining if you do or don't have something, a commercial CMS system is likely to be of value. But if your content is well understood by your authors, created in a disciplined way, and organized in a way that makes sense, then you may be able to get by without dedicated search support for quite a long time.

In addition, you can do things with maps to provide catalogs of components and so on. Neatness always counts and this is an area where a little thought and planning can go a long way.

Service 6: Management of Media Objects

This depends a lot on your specific media management requirements, but SCM systems like git and Subversion can manage binaries just fine. You can use continuous integration systems like Jenkins and open-source tools like ImageMagick to automate format conversion, metadata extraction, and so on.

If you have huge volumes of media assets, requirements like rights management, complex development workflows, and so on, then a CMS with DAM features is probably indicated.

But if you're just managing images that support your manuals, you can probably get by with some well-thought-out naming and organizational conventions and use of keys to reference your media objects. 
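For example (a sketch; the key names and paths reflect a made-up naming convention):

<!-- In the map: one key definition per media asset -->
<keydef keys="image-widget-assembly" href="images/widget-assembly.png" format="png"/>

<!-- In topics: reference the asset by key rather than by path -->
<image keyref="image-widget-assembly">
  <alt>Exploded view of the widget assembly</alt>
</image>

If an image later moves or is replaced, only the key definition has to change.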

Service 7: Link Management

This is the service where CMS systems really shine, because without a dedicated, central, DITA-aware repository that can maintain real-time knowledge of all links within your DITA content, it's difficult to answer the "where-used" question quickly. It's always possible to implement brute-force processing to do it, but SCM systems like Subversion or git are not going to do anything for you here out of the box. It's possible to implement commit-time processing to capture and update link information (which you could automate with Jenkins, for example), but that's not something a typical small team is going to implement on their own.

On the other hand, by using clear and consistent file, key, and ID naming conventions and using keys you can make manual link management easier--that's essentially what we did at IBM all those years ago when all we had were stone knives and bear skins. The same principles still apply today.

An interesting exercise would be to use a Jenkins job to maintain a simple where-used database that's updated on commit to the documentation SCM repository. It wouldn't be that hard to do.

Service 8: Deliverable Production

CMS systems can help with process automation and the value of that is significant. However, my recent experience with setting up Jenkins to automate build processes using the Open Toolkit makes it clear that it's now pretty easy to set up DITA process automation with available tools. It takes no special knowledge beyond knowing how to set up the job itself, which is not hard, just non-obvious.

Jenkins is a "continuos integration" (CI) server that provides general facilities for running arbitrary processes trigged by things like commits to source code repositories. Jenkins is optimized for Java-based projects and has built-in support for running Ant, connecting to Subversion and git repositories, and so on. This means you can have a Jenkins job triggered when you commit objects to your Subversion or git repository, run the Open Toolkit or any other command you can script, and either simply archive the result within the Jenkins server or transfer the result somewhere else. You can implement various success/failure and quality checks and have it notify you by email or other means when something breaks. Jenkins provides a nice dashboard for getting build status, investigating builds, and so on. Jenkins is an open-source tool that is widely used within the Java development community. It's easy to install and available in all the cloud service environments. If your company develops software it's likely you already use Jenkins or an equivalent CI system that you could use to implement build automation.

My experience using CloudBees to start setting up a test environment for DITA for Publishers was particularly positive. It literally took me minutes to set up a CloudBees account, provision a Jenkins server, and set up a job that would be able to run the OT. The only thing I needed to do to the Jenkins server was install the multiple-source-code-management plugin, which just means finding it in the Jenkins plugin manager and pushing the "install" button. I had to set up a github repository to hold my configured Open Toolkit but that also just took a few minutes. Barring somebody setting up a pre-configured service that does exactly this, it's hard to see how it could be much easier.

I think that Jenkins + git coupled with low-cost cloud services like CloudBees really changes the equation, making things that would otherwise be sufficiently difficult as to put off implementation easy enough that any small team should be able to do it well within the scope of resources and time they have.

This shouldn't worry CMS vendors--it can only help to grow the DITA market and help foster more DITA projects that are quickly successful, setting the stage for those teams to upgrade to more robust commercial systems as their requirements and experience grow. Demonstrating success quickly is essential to the success of any project and definitely for DITA projects undertaken by small Tech Doc teams who always have to struggle for budget and staff due to the nature of Tech Doc as a cost center. But a demonstrated DITA success can help to make the value of high-quality information clearer to the enterprise, creating opportunities to get more support to do more, which requires more sophisticated tools.


Sunday, August 11, 2013

Monastic SGML: 20 Years On

In 1993 I was working at IBM with Wayne Wohler, Don Day, Simcha Gralla, and others on IBM ID Doc, the SGML replacement for IBM's GML-based Bookmaster application, which was used for all of IBM's product documentation and much of its internal documentation. Wayne worked for IBM Publishing solutions and had been one of the developers of IBM's SGML processing tool set, having taken Charles Goldfarb's original SGML parser implementation and reworked it into something appropriate for an IBM product (Charles was also an IBM employee during the time he developed the SGML standard and HyTime). Wayne had also been involved in various efforts to develop or adapt visual editors for editing GML and SGML. At the time, Wayne and I were also developing the specifications for a general authoring support system that would manage SGML, allow editing, and so on.

IBM had been doing pretty sophisticated content reuse even back in the 80's using what facilities there were in the IBM Document Composition Facility (DCF), which was the underpinning for the Bookmaster application. So we understood the requirements for modular content, sharing of small document components among publications, and so on.

We were also trying to apply the HyTime standard to IBM ID Doc's linking requirements and I was starting to work with Charles Goldfarb and Steven Newcomb on the 2nd edition of the HyTime standard.

Out of that work we started to realize that SGML, with its focus on syntax and its many features designed to make the syntax easy to type, made SGML difficult to process in the context of things like visual editors and content management systems, because they imposed sequential processing requirements on the content.

We started to realize that for the types of applications we were building, a more abstract, node-based way of viewing SGML was required and that certain SGML features got in the way of that.

Remember that this was in the early days of object-oriented programming so the general concept of managing things as trees or graphs of nodes was not as current as it is now. Also, computers were much less capable, so you couldn't just say "load all that stuff into memory and then chew on it" because the memory just wasn't there, at least not on PCs and minicomputers. For comparison, at that time, it took about 8 clock hours on an IBM mainframe to render a 500-page manual to print using the Bookmaster application. That was running over night when the load on the mainframe was relatively low.

Out of this experience Wayne and I developed the concept of "monastic SGML", which was simply choosing not to use those features of SGML that got in the way of the kind of processing we wanted to do.

We presented these ideas at the SGML '93 conference as a poster. That poster, I'm told, had a profound effect on many thought leaders in the SGML community and helped start the process that led to the development of XML. I was invited by Jon Bosak to join the "SGML on the Web" working group he was forming specifically because of monastic SGML (I left IBM at the end of 1993 and my new employer, Passage Systems, generously allowed me to both continue my SGML and HyTime standards work and join this new SGML on the Web activity, as did my next employer, ISOGEN, when I left Passage Systems in 1996).

For this, the 20th anniversary of the presentation of monastic SGML to the world, Debbie Lapeyre asked if I could put up a poster reflecting on monastic SGML at the Balisage conference. I didn't have any record of the poster with me and Debbie hadn't been able to find one in years past, but I reached out to Wayne and he dug through his archives and found the original SGML source for the poster, so I was able to post the original monastic SGML poster at the conference after all. My reflections follow the poster text below.

The text of the poster is here:

Monastic SGML

Objective

Facilitate reuse of document fragments by enabling more reliable validation of document fragments without knowing all contexts in which they are used. Secondary objective: Remove sequential processing biases from the datastream wherever possible.

Assumptions

Document fragments contain a single element and its content representing a proper subtree of a document and this element is valid in every point at which the fragment is referenced.

Rules

  • Don't use inclusions except on the root element, don't use exclusions
    Inclusions and exclusions can have the effect of invalidating the content of an element in one context while it remains valid in another.
  • Do not define short reference maps in the DTD
    Short references can change the recognition of delimiters based on context which can make a fragment invalid in one context while not in another.
    Other reasons to avoid them:
    • If USEMAP declarations occur in an instance, they are inherently sequential.
    • Short references can be used to obscure the true meaning of the markup in a given context.
  • Don't use #CURRENT attributes in the DTD
    #CURRENT attribute's use of values from prior specifications can make the first occurance of a fragment invalid.
    Other reasons to avoid them:
    • This construct is inherently sequential.
  • Avoid the use of IGNORE/INCLUDE marked sections
    These marked section types make it impossible to validate the information without
    • knowing all valid combinations of conditions for all using document
    • modifying all using documents to set these conditions

If you compare XML to these rules, you can see that they were certainly applied in the design of XML, along with a lot more.

Inclusions and exclusions were a powerful, if somewhat dangerous, feature of SGML DTDs: you could define a content model and then additionally either allow element types that would be valid in any context descending from the element being declared (inclusions) or disallow element types from any descendant context (exclusions). Interestingly, RelaxNG has almost this feature, because you can modify base patterns to either allow additional things or disallow specific things, the difference being that the addition or removal only applies to the specific context, not to all descendant contexts, which was the really evil part of inclusions and exclusions. Essentially, inclusions and exclusions were a syntactic convenience that let you avoid more heavily-parameterized content models or otherwise having to craft your content models for each element type.
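For example, a pair of declarations along these lines (a made-up fragment, not from any real DTD) shows both mechanisms:

  <!-- Inclusion: footnote becomes valid anywhere within a chapter's subtree -->
  <!ELEMENT chapter  - -  (title, para+, section*)  +(footnote)>

  <!-- Exclusion: footnotes may not nest inside a footnote, at any depth -->
  <!ELEMENT footnote - -  (para+)  -(footnote)>

Whether <footnote> is valid at some point inside a <para> thus depends on which ancestors happen to be present, which is exactly what makes validating a fragment on its own unreliable.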

In DITA, you see this reflected in the DTD implementation pattern for element types where every element type's content model is fully parameterized in a way that allows for global extension (domain integration) and relatively easy override (constraint modules that simply redeclare the base content-model-defining parameter entity). DocBook and JATS (NLM) have similar patterns.
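The coding pattern looks roughly like this (a simplified sketch, not the actual declarations from the DITA DTDs):

  <!-- Shell DTD: overrides come first, because the first declaration
       of a parameter entity is the one that wins -->

  <!-- Domain integration: extend the phrase class with domain elements -->
  <!ENTITY % ph  "ph | b | i | codeph">

  <!-- Constraint module: redeclare the content-model entity to remove options -->
  <!ENTITY % p.content  "(#PCDATA | %ph;)*">

  <!-- Base module (pulled in after the overrides): every content model is
       defined via a parameter entity so that shells can override it -->
  <!ENTITY % p.content  "(#PCDATA | %ph; | xref | image)*">
  <!ELEMENT p  %p.content;>

A document type shell just arranges these pieces in the right order; nobody edits the base module itself.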

Short references allowed you to effectively define custom syntaxes that would be parsed as SGML. It was a clever feature intended to support the sorts of things we do today with Wiki markup and ASCII equation markup and so on. In many cases it allowed existing text-based syntaxes to be parsed as SGML. It was driven by the requirement to enable rapid authoring of SGML content in text editors, such as for data conversion. That requirement made sense in 1986 and even in 1996, but is much less interesting now, both because ways of authoring have improved and because there are more general tools for doing parsing and transformation that don't need to be baked into the parser for one particular data format. At the time, SGML was really the only thing out there with any sort of a general-purpose parser.

One particularly pernicious feature of shortref was that you could turn it on and off within a document instance, as we allude to in our rules above. This meant that you had to know what the current shortref set was in order to parse a given part of the document. That works fine for sequential parsing of entire documents, but fails in the case of parsing document fragments out of any large document context.
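A rough sketch of the mechanism (the map and entity names here are invented):

  <!-- A blank line in the input implies the start of a new paragraph -->
  <!ENTITY   ptag  STARTTAG "p">
  <!SHORTREF paramap  "&#RS;&#RE;"  ptag>
  <!USEMAP   paramap  section>

Because a USEMAP declaration can also appear in the document instance to switch maps on and off, neither a parser nor a person can know how to tokenize a stretch of text without having seen every USEMAP that precedes it.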

The #CURRENT default option for SGML attributes allowed you to say that the last specified value for the attribute should be used as the default value. This feature was problematic for a number of reasons, but above all it imposed a sequential processing requirement on the content. This is a feature we dropped from XML without a second thought, as far as I can remember. The semantics of attribute value inheritance or propagation are challenging at best, because they are always dependent on the specific business rules of the vocabulary. During the development of HyTime 2 we tried to work out some general mechanism for expressing the rules for attribute value propagation and gave up. In DITA you see the challenge reflected in the rules for metadata cascade within maps and from maps to topics, which are both complex and somewhat fuzzy. We're trying to clarify them in DITA 1.3, but it's hard to do; there are many edge cases.
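A made-up example of the declaration and its effect:

  <!ATTLIST graphic
    depth  NUMBER  #CURRENT>

  <!-- In the instance: -->
  <graphic depth="12">  <!-- establishes the 'current' value -->
  <graphic>             <!-- implicitly gets depth="12" from the prior use -->

A fragment whose first <graphic> omits the attribute is valid only if some earlier part of the using document happened to supply a value, which is precisely the sequential dependency the poster objects to.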

XML still has INCLUDE and IGNORE marked sections, but only in the external subset of the DTD. In SGML they could go in document instances, providing a weak form of conditional processing. But for obvious reasons that didn't work well in an authoring or management context. Modern SGML and XML applications all use element-based profiling, of course. Certainly once SGML editors like Author/Editor (now XMetal) and Arbortext ADEPT (now Arbortext Editor) were in common use, the use of conditional marked sections in SGML content largely went away.
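In SGML the conditional sections could sit right in the instance, with the keywords typically supplied through parameter entities set in each using document's declaration subset. Something like this (the condition names are invented):

  <!ENTITY % os-windows  "INCLUDE">
  <!ENTITY % os-linux    "IGNORE">

  <![ %os-windows; [ <p>Run setup.exe to install the product.</p> ]]>
  <![ %os-linux;   [ <p>Run ./install.sh to install the product.</p> ]]>

To validate a topic that does this you either have to know every combination of settings that any using document might apply, or edit every using document to set them, which is exactly what the last rule on the poster complains about.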

Looking at these rules now, I'm struck by the fact that we didn't say anything about DTDs in general (that is, the requirement for them), nor anything about the use of parsed entities, which we now know are evil. We didn't say anything about markup minimization, which was a large part of what got left out of XML. We clearly still had the mindset that DTDs were either a given or a hard requirement. We no longer have that mindset.

SGML did have the notion of "subdoc" but it wasn't fully baked and it never really got used (largely because it wasn't useful, although well intentioned). You see the requirement reflected today in things like DITA maps and conref, XInclude, and similar element-based, link-based use-by-reference features. The insight that I had (and why I think XInclude is misguided) is that use-by-reference is an application-level concern, not a source-level concern, which means it's something that is done by the application, as it is in DITA, for example, and not something that should be done by the parser, as XInclude is. Because it is processed by the parser, XInclude ends up being no better than external parsed entities.
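The contrast is easy to see in markup (the file names and IDs here are invented):

  <!-- DITA conref: the reference is ordinary content; a DITA-aware processor
       resolves it as a late, application-level step -->
  <p conref="common-warnings.dita#common_warnings/power-warning"/>

  <!-- XInclude: an XInclude-aware parser replaces the element during parsing,
       before the application ever sees the document -->
  <xi:include xmlns:xi="http://www.w3.org/2001/XInclude"
              href="power-warning.xml"/>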

If we look at XML, it retains one markup minimization feature from SGML, default attributes. These require DTDs or XSDs or (now) RelaxNGs that use the separate DTD compatibility annotations. Except for #CURRENT, which is obviously a very bad idea, we didn't say anything about attribute defaults. I think this reflects the fact that default attributes are simply such a useful feature that they must be retained. Certainly DITA depends on them and many other vocabularies do as well, especially those developed for complex documentation.

But I can also say from personal experience that defaulted attributes still cause problems for content management. If a document does not carry all of its attributes in the instance and, as in DITA, certain attributes are required to support specific processing (e.g., specialization-aware processing), then processing the document outside the context of a schema that supplies those attributes will fail, sometimes in apparently random and non-obvious ways (at least to those not familiar with the document's attribute-based processing requirements).
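DITA's @class attribute is the obvious example: its value normally comes entirely from an attribute default in the governing DTD or schema, along the lines of this (slightly simplified) declaration for <step>:

  <!-- In the task module: <step> declares its specialization ancestry -->
  <!ATTLIST step  class  CDATA  "- topic/li task/step ">

  <!-- What the author types:       <step>Insert the battery.</step>     -->
  <!-- What processing depends on:  <step class="- topic/li task/step "> -->

Parse that topic without its DTD and the @class values simply are not there, so anything that relies on them--generalization, specialization-aware rendering, and the like--fails.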

I later somewhat disavowed monastic SGML because I felt it put an unnecessary focus on syntax over abstraction. As I further developed my understanding of abstractions of data as distinct from their syntactic representations, I realized that the syntax to a large degree doesn't matter, and that our concerns were somewhat unwarranted because once you parse the SGML initially, you have a normalized abstract representation that largely transcends the syntax. If you can then store and manage the content in terms of the abstraction, the original syntax doesn't matter too much.

Of course, it's not quite this simple if, for example, you need to remember things like original entity references or CDATA marked sections or other syntactic details so that you can recreate them exactly. So my disavowal may itself have been somewhat misguided. Syntax still matters, but it's not everything. At this year's Balisage there were several interesting papers focusing on the syntax/semantics distinction and, for example, defining general approaches for treating any syntax as XML and what that means or doesn't mean.

I for one do not miss any of the features of SGML that we left out of XML and am happy, for example, to have the option of not using DTDs when I don't need or want them or want to use some other document constraint language, like XSD or RelaxNG. Wayne and I were certainly on to something and I'm proud that we made a noticeable contribution to the development of XML.

For the historical record, here is the original SGML source for the poster as recovered from Wayne's personal archive:
<h1>Monastic SGML
<h5>Objective
<p>Facilitate reuse of document fragments by enabling more reliable
validation of document fragments without knowing all contexts in which
they are used.
Secondary objective&colon; Remove
sequential processing biases from datastream whereever possible.
<h5>Assumptions
<p>Document fragments contain a single element and its content
representing a proper subtree of a document and
this element is valid in every point at which the fragment is referenced.
<h2>Rules
<ul>
<li>Don't use inclusions except on the root element, don't use exclusions
<p>Inclusions and exclusions can have the effect of invalidating the
content of an element in one context while it remains valid in another.
<li>Do not define short reference maps in the DTD
<p>Short references can change the recognition of delimiters based on
context which can make a fragment invalid in one context while not in
another.
<p>Other reasons to avoid them:
<ul compact>
<li>If USEMAP declarations occur in an instance, they are inherently
sequential.
<li>Short references can be used
to obscure the true meaning of the markup in a given
context.
</ul>
<li>Don't use #CURRENT attributes in the DTD
<p>#CURRENT attribute's use of values from prior specifications
can make the first occurance of a fragment invalid.
<p>Other reasons to avoid them:
<ul compact>
<li>This construct is inherently sequential.
</ul>
<li>Avoid the use of IGNORE/INCLUDE marked sections
<p>These marked section types make it impossible to validate the
information without
<ul compact>
<li>knowing all valid combinations of conditions for all using
document
<li>modifying all using documents to set these conditions
</ul>
</ul>


Sunday, February 20, 2011

Physical Improvement for Geeks: The Four Hour Body

I've just read through all of Tim Ferriss' The Four Hour Body (http://fourhourbody.com/) (4HB). Short version of the review: I found it really interesting and helpful, and generally full of sound advice and guidance delivered with a dose of humor. I am starting on the book's Slow Carb Diet (SCD) in an attempt to lose 20 lbs of mostly visceral fat (read "lose my beer gut" and try to live to see my daughter graduate from college).

The book is written from a geek's perspective for geeks. It essentially takes an engineering approach to body tuning based on self experimentation, measurement, and application of sound scientific principles. In a post on the 4HB blog Tim captures the basic approach and purpose of the book:

"To reiterate: The entire goal of 4HB is to make you a self-sufficient self-experimenter within safe boundaries. Track yourself, follow the rules, and track the changes if you break or bend the rules. Simple as that. That’s what I did to arrive at my conclusions, and that’s what you will do — with a huge head start with the 4HB — to arrive at yours."

I've done Atkins in the past with some success, so I know that for me a general low-carb approach will work. The Slow Carb Diet essentially takes Atkins and reduces it to the essential aspects that create change. The biggest difference between Atkins and the SCD is that the SCD eliminates all dairy because of its contribution to insulin spiking despite a low glycemic index. So no cheese or sugar-free ice cream (which we got really good at making back in our Atkins days). The SCD also includes a weekly "cheat day" where you eat whatever crap you want, as much as you can choke down. After 6 days I've lost 3.5 lbs, which is about what I would expect at the start of a strict low-carb diet. I haven't had the same degree of mind alteration that I got from the Atkins induction process, which is nice, because that was always a pretty rough week for everybody.

What I found interesting about the 4HB was that Tim is simply presenting his findings and saying "this worked, this didn't, here's why we think this did or didn't work." He's not selling a system or pushing supplements or trying to sell videos. His constant point is "don't take my word for it, test it yourself. I might be spouting bullsh*t so test, test, test."

As an engineer, I found that approach resonated with me. He also spends a lot of time explaining why professional research is often useless, flawed, biased, or otherwise simply not helpful, if not downright counterproductive. As somebody who's always testing assumptions and asking for proof, I liked that too.

He even has an appendix where he presents some data gathered from people who used the SCD, which, as presented, suggested some interesting findings and made the diet look remarkably effective. He then goes through the numbers and shows why they are deceptive and can't be trusted in a number of ways. If his intent was to sell the diet he would have just presented the numbers. Nice.

His focus is as much on the mental process as on the physical process: measure, evaluate, question, in short, think about what you're doing and why. Control variables as much as possible in your experiments.

I highly recommend the book for anyone who's thinking about trying to lose weight or improve their physical performance in whatever way they need to--Ferriss pretty much covers all bases, from simple weight and fat loss to gaining muscle, improving strength, etc.

He has two chapters focused on sexual improvements, one on female orgasm and one on raising testosterone levels, sperm count, and general libido in males. These could have come off as pretty salacious and "look at what a sex machine I've become", but I didn't read them that way. Rather, his point was that improving the sexual aspects of one's life is important to becoming a more complete person--it's an important part of being human, so why not enjoy it to its fullest? I personally went through a male fertility issue when my wife and I tried to start a family, and if I'd had the chapter on improving male fertility at that time (and if my fertility had actually been relevant) it would have been a godsend. One easy takeaway from that chapter: if you want kids, don't carry an active cell phone in your pocket.

There's also an interesting chapter on sleep: how to get better sleep, how to need less sleep, and so on. Some intriguing stuff there as well, including simple actions that might make significant positive changes in sleep patterns and a technique for getting by on very little sleep if you can maintain a freaky-hard nap schedule.

Overall I found the book thoughtful, clearly written, engaging, entertaining, and generally helpful. I found very few things that made me go "yeah right" or "oh please" or any of the other reactions I often have to self-help books. He stresses being careful and responsible and having a clear understanding of what your goal is. In short, sound engineering practice applied to your physical self.

Dr. Macro says check it out.


Saturday, February 19, 2011

Chevy Volt Adventure: Feb Diagnostic Report

Just got the February vehicle diagnostic report email from the Volt. I'm not sure why I find it so cool that my car can send me email, but I do.

The salient numbers are:

35 kW-hr/100 miles

1 Gallon of gasoline used. [This is actually an overstatement as we have only used 0.2 gallons since returning from our Houston trip at the end of December.]

Our electricity usage for January (the latest numbers I have) was as follows (numbers in parentheses are for January 2010):

Total kW-hr: 954 (749)
Grid kW-hr: 723 (455)
Solar kW-hr: 231 (294)
Dollars billed: $58.37 ($35.12)

$/kWh used: $0.06 ($58.37/954)

kWh/mile: 0.35 (35 kW-hr/100 miles)

$/mile: $0.02

Our bill for Dec was $32.00, so we spent an extra $26.00 on electricity in January, some of which can be attributed to the unusually cold winter we've been having. We also produced about 60kWh less this January than last.

But if we assume that most of the difference was the Volt, that means it cost us about $20.00 to drive the vehicle for the month. We used essentially no gasoline so the electricity cost was our total operating cost.

Looking at the numbers, it also means that the draw from the car is less than or roughly equal to the solar we produced over the same period. Not much of that solar went to actually charging the Volt, since we tend to charge later in the day or overnight after having done stuff during the day. But if Austin Energy actually gave us market rates for our produced electricity, rather than the steep discount they do give us, we could truthfully say we have a solar-powered car, even in January. For contrast, our maximum solar production last year was 481 kWh in August, with numbers around 400 kWh most months.

Compare this cost with a gasoline vehicle getting 30 mpg around town at $3.00/gallon (the current price here in Austin):

30 miles/gallon is about 0.033 gallons/mile; 0.033 gallons/mile * $3.00/gallon =

$/mile: 0.10

However, our other car, a 2005 Toyota Solara, only gets about 22 mpg around town, which comes out to

$/mile: 0.14

Of course these numbers only reflect direct operating cost, not the cost of our PV system or the extra cost of the Volt itself relative to a comparable gas-powered vehicle, but that's not the point, is it? The point is not just lower operating cost but being a zero-emissions vehicle most days and using (or potentially using) more sustainable sources of energy.

But another interesting implication is what happens (or will happen) when the majority of vehicles are electric. If our use is typical, it means about a 25% increase in electricity consumption just for transportation. What does that mean for the electricity infrastructure? Would we be able, in the U.S., to add 25% more capacity in, say, 10 years without resorting to coal? How much of that increase could be met through conservation? It seems like it could be a serious challenge for the already-straining grid infrastructure, something we know we need to address simply to make wind practical (because of the current nature of the U.S. grid).

If Chevy and the other EV manufacturers can bring the cost down, which they inevitably will, people are going to flock to these cars because they're fun to drive, cheaper to operate, and better for the air. Given the expected rate of advance in battery technology and the normal economies of scale, it seems reasonable to expect the cost of electric vehicles to be comparable to gasoline vehicles in about 5 years. If gas prices rise even $1.00/gallon in that time, which seems like a pretty safe bet (though I would have expected gas to be at $5.00/gallon by now after its spike back in 2008), the attractiveness of electric vehicles will be even greater.

Which is all to say that I fully expect EVs like the Volt to catch on in a big way in about 5 years, which I think could spell, if not disaster, then at least serious strain for the U.S. electricity infrastructure. I know the City of Austin is thinking about it, because that's their motivation for paying for our charging station: monitor the draw from the car so they can plan appropriately. But are we doing that at a national level? I have no idea, but history does not instill confidence, let us say.


Wednesday, January 19, 2011

Chevy Volt Adventure: Fun to Drive

We've been driving the Volt around town now for a few weeks and the biggest surprise to me is how much fun it is to drive. The instant acceleration, freaky smoothness, and weight-enhanced handling make it a lot of fun to drive. You can zip around, corner hard, and do it all without fuss or noise. And we haven't even tried sport mode yet.

As for the car itself, it seems to be holding up well--I haven't noticed anything particularly tinny or annoying, with the possible exception of the charge port cover. It seems a little weak, but then it's just a little cover. The latch is also a little less aggressive than I'd like--a couple of times I've thought I pushed it closed but it hadn't caught.

We are clearly not driving in the most efficient manner because our full-charge electric range is currently estimated at about 30 miles, which our Volt Assistant at GM assures us reflects our profligate driving style and not an issue with reduced battery capacity.

As a family car it's working fine. With our around-town driving we've only had to use a fraction of a gallon of gas when we've forgotten to plug in after a trip. So our lifetime gas usage total is about 8.6 gallons, of which 8.5 were used on the round trip to Houston.


Tuesday, January 04, 2011

Chevy Volt Adventure: Houston Trip 1

On Christmas Eve we loaded up the Volt and headed to Grandma's house in Houston.
[Photo IMG_0948: the cargo area loaded for the trip]
The picture shows the cargo area loaded for the trip. The cargo space is a little cramped but was able to accommodate what we needed for this trip, including all the gifts. It would be hard-pressed to hold three full-sized rollaboards.

In the car we had me, my wife, our daughter, and our dog, Humphrey (a basset hound). Everyone was comfortable but this is definitely a 4-passenger vehicle because of the bucket seats in back. The seats were reasonably comfortable for a 3-hour trip, comparable to what I'm used to from our other car, a 2005 Toyota Solara convertible.

The total round trip from our house to Grandma's house is about 450 miles. The trip meter reports we used 8.1 gallons for a trip MPG of about 51, which is pretty good.

In our Solara, which averages about 22 MPG overall and gets probably 30 or so on the highway, we usually fill up at the halfway point out and back, using a full 15-gallon tank over the course of the trip. On this trip we didn't stop to fill up until the return, when the tank showed 3/4 empty. I put in about 6 gallons but I think the tank didn't fill (it was the first time I'd put gas in so I had no idea how much to expect to need—the tank must be 10 gallons if 3/4 reflected an 8-gallon deficit).

On the way out, the battery lasted from Austin to just outside Bastrop, about 30 miles. It's clear that, as expected, highway speeds are less efficient than around-town speeds. I'd be interested to know what the efficiency curve looks like: is it more or less linear, or, more likely, does it curve sharply upward above, say, 50 MPH? My intuition says 40 MPH is the sweet spot. I tried to keep it between 60 and 70 for most of the trip (the posted limit for most of the trip is 70). I drove a little faster on the way home, having realized that it didn't make much difference in efficiency.

Highway driving was fine. The car is heavy for its size, with the batteries distributed along the main axis, which makes it handle more like a big car than the compact it is. Highway 71 is pretty rough in places but the car was reasonably quiet at 70. When we left I-10 in Houston there was enough accumulated charge to use the battery for the couple of miles to my mother-in-law's house.

It definitely has power to spare and plenty of oomph. There's no hesitation when you stamp the accelerator and I had no problem going from 45 to 65 almost instantly to get from behind a slow car on I-10. We have yet to try the "sport" driving mode but now I'm almost afraid to.

The car is really smooth to drive--like driving an electric golf cart in the way it just smoothly takes off and doesn't make any noise.

If we had a problem, it was the underbuilt electrical circuit at Grandma's that served the garage: at one point, with the car plugged in and charging, the circuit breaker tripped (a 15-amp circuit). It turned out the circuit also served most of the kitchen, where we were busy preparing Christmas dinner.

If there is any practical issue with the vehicle it's the climate control—it takes a lot of energy to heat it. Houston was having a cold snap so we got to test the heating system. The multi-position seat heaters are nice but keeping the controls on the "econ" setting meant that backseat passengers sometimes got a little chilled. You do realize how much waste heat gas engines produce when you don't have it available to turn your car into a sauna.

It was also weird to get back from a drive and realize that the hood is still cold.

We spent the last week traveling in the Northwest and rented the cheapest car Enterprise offers, which turned out to be a Nissan Versa, a tinny little econobox. The contrast was dramatic and made me appreciate the Volt. The two vehicles are comparable in size and capacity (but not cost, of course), but the Versa had a hard time making it up to highway speed and sounded like the engine might come apart or explode under stress, or like the whole car might blow off the road in a stiff breeze.

Now that we're back to our normal workaday life we'll see how it does in our normal around-town driving, but my expectation is that we'll use very little, if any, gas as we seldom need to go more than 10 miles from home (our longest usual trip is up north to Fry's, which is about a 20-mile round trip). We'll probably take it out to Llano and Lockhart for BBQ if we get a warm weekend in the next month or so.

On the way back from Houston we ended up near a Prius and ran into them at the gas station. They were interested in how the Volt was working and we got to compare MPG and generally be smug together. I ended up following them the rest of the way into Austin, figuring they probably reflected an appropriately efficient speed.

And I'm still getting a kick out of plugging it in whenever I bring it back home.


Tuesday, December 21, 2010

Chevy Volt Adventure

My family is now the second (in Texas or Austin, not 100% sure) to take delivery of a 2011 Chevy Volt. We got it last night and it's sitting in the carport happily charged.

The car is very cool, very high tech. It sends you status emails. It chides you for jackrabbit starts (although I gather other electric and hybrid vehicles do as well).

It is freaky quiet in electric mode, a bit rumbly in extended mode.

The interior is pretty nice, reasonably well laid out, nicely detailed. The back seat is reasonably comfortable (I have the torso of a 6-foot person and my head cleared the back window).

Accelerates snappily in normal driving mode (haven't had a chance to try the "sport" mode yet). Handles pretty nicely (the batteries are stored along the center length of the vehicle, giving it pretty good balance).

We'll be driving it to Houston, about 500 miles round trip, in a couple of days. I'll report our experience.

Early adopters get some perks. We get 5 years of free OnStar service. We get a free 240v charging station from the City of Austin at the cost of letting them monitor the energy usage of the charger. We get a special parking space at the new branch library near us. The Whole Foods flagship store has charging stations--might actually motivate me to shop there (we normally avoid that Whole Foods because it's really hard to park and you know, it's Whole Foods).

One thing that will take some getting used to is not having to put a key into it in order to operate it. I kept reflexively reaching toward the steering column to remove the key that wasn't there.

Here's a question for you Electrical Engineers out there: what is the equivalent to miles per gallon for an electric vehicle? Is it miles per megajoule? miles per amp-hour?

I'm trying to remember what the unit of potential electrical energy is and coming up blank (not sure I ever really knew).

Oh, and since we have a PV system on the house and can control when charging takes place, I am going to claim that this Volt is a solar powered vehicle.


Wednesday, September 01, 2010

Norm Reconsiders DITA Specialization

Norm Walsh has published a very interesting post to his blog, Reconsidering specialization, part the first.

This is very significant and I eagerly await Norm's thoughts.

As Norm relates in his post, he and I had what I thought was a very productive discussion about specialization and what it could mean in a DocBook context. I think Norm characterized my position accurately, namely that the essential difference between DocBook and DITA is specialization and that makes DITA better.

Here by "better" I mean "better value for the type of applications to which DITA and DocBook are applied". It's a better value because:

1. Specialization enables blind interchange, which I think is very important, if not of utmost importance, even if that interchange is only with your future self.

2. Specialization lowers the cost of implementing new markup vocabularies (that is, custom markup for a specific use community) by roughly an order of magnitude.

There's more to it than that, of course, but those are the key bits.
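To make that concrete: a specialized element declares its ancestry through its @class attribute, so a processor that has never heard of the specialization can still fall back to base types it does know. A sketch, with an invented "install-d" module (the @class values would normally be defaulted in from the DTD):

  <windows-install id="win-install"
      class="- topic/topic install-d/windows-install ">
    <title class="- topic/title ">Installing on Windows</title>
    <body class="- topic/body ">
      <p class="- topic/p ">Run the installer from the product media.</p>
    </body>
  </windows-install>

A generic DITA processor simply treats <windows-install> as a topic/topic and handles it with its default topic processing; only tools that know the install-d module need to do anything more specific. That fallback behavior is what makes blind interchange possible.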

All the other aspects of DITA that people see as distinguishing: modularity, maps, conref, etc., could all be replicated in DocBook.

If we assume that DITA's more sophisticated features, like maps and keyref and so forth, are no more complicated than they need to be to meet requirements, then the best that DocBook could do is implement the exact equivalent of those features, which is fine. So to that degree, DocBook and DITA are (or could be) functionally equivalent in terms of specific markup features. (But note that any statement to the effect that "DITA's features are too complicated" reflects a lack of understanding of the requirements that DITA satisfies--I can assure you that there is no aspect of DITA that is not used and depended on by at least one significant user community. That is, any attempt, for example, to add a map-like facility to DocBook that does not reflect all the functional aspects of DITA maps will simply fail to satisfy the requirements of a significant set of potential users.)

But note that currently DocBook and DITA are *not* functionally equivalent: DocBook lacks a number of important features needed to support modularity and reuse. But I don't consider that important. What really matters is specialization.

Note also that I'm not necessarily suggesting that DocBook adapt the DITA specialization mechanism exactly as it's formulated in DITA. I'm suggesting that DocBook needs the functional equivalent of DITA's specialization facility.

Note also that DocBook as currently formulated at a content model level probably cannot be made to satisfy the constraints specialization requires in terms of consistency of structural patterns along a specialization hierarchy and probably lacks a number of content model options that you'd want to have in order to support reasonable specializations from a given base.

But those are design problems that could be fixed in a DocBook V6 or something if it was important or useful to do so.

Finally, note that in DITA 2.0 there is the expectation that the specialization facility will be reengineered from scratch. That would be the ideal opportunity to work jointly to develop a specialization mechanism that satisfied requirements beyond those specifically brought by DITA. In particular, any new mechanism needs to play well with namespaces, which the current DITA mechanism does not (but note that it was designed before namespaces were standardized).

Monday, August 09, 2010

Worse is Better, or Is It?

At the just-concluded Balisage conference, Michael Sperberg-McQueen brought up the (apparently) famous "worse is better" essay by Richard P. Gabriel (Wikipedia entry here, original paper here). I had never heard of this (or at least had no memory of ever hearing of it) even though it is directly relevant to my experiences as a standard developer and engineer, where I've done things in both the "MIT" way (correctness is most important) and, more or less, the "New Jersey" way (simplicity is most important). I was actually very surprised that nobody had ever pointed me to it before.

Gabriel's original argument is essentially that software that chooses simplicity over correctness and completeness has better survivability for a number of reasons, and cites as a prime example Unix and C, which spread precisely because they were simple (and thus easy to port) in spite of being neither complete functionally nor consistent in terms of their interfaces (user or programming). Gabriel then goes on, over the years, to argue against his own original assertion that worse is better and essentially falls into a state of oscillation between "yes it is" and "no it isn't" (see his history of his thought here).

The concept of "worse is better" certainly resonated with me because I have, for most of my career, fought against it at every turn, insisting on correctness and completeness as the primary concerns. This is in some part because of my work in standards, where correctness is of course important, and in part because I'm inherently an idealist by inclination, and in part because I grew up in IBM in the 80's when a company like IBM could still afford the time and cost of correctness over simplicity (or thought it could).

XML largely broke me of that. I was very humbled by XML and the general "80% is good enough" approach of the W3C and the Web in general. It took me a long time to get over my anger at the fact that they were right because I didn't want to live in that world, a world where <a href/> was the height of hyperlinking sophistication.

I got over it.

Around 1999 I started working as part of a pure Extreme Programming team implementing a content management system based on a simple but powerful abstract model (the SnapCM model I've posted about here in the past) and implemented using iterative, requirements-driven processes. We were very successful, in that we implemented exactly what we wanted to, in a timely fashion and with all the performance characteristics we needed, and without sacrificing any essential aspects of the design for the sake of simplicity of implementation or any other form of expediency.

That experience convinced me that agile methods, as typified by Extreme Programming, are very effective, if not the most effective, engineering approach. But it also taught me the value of good abstract models: they ensure consistency of purpose and implementation and allow you to have both simplicity of implementation and consistency of interface. One need not be sacrificed for the other if you can do a bit of advance planning (but not too much--that's another lesson of agile methods).

Gabriel's inability to decide conclusively whether "worse is better" is actually better got me thinking, and the conclusion I came to is that the reason he can't decide is that both sides of his dichotomy are in fact wrong.

Extreme Programming says "start with the simplest thing that could possibly work" (italics mine). This is not the same as saying "simplicity trumps correctness", it just says "start simple". You then iterate until your tests pass. The tests reflect documented and verified user requirements.

The "worse is better" approach as defined by Gabriel is similar in that it also involves iteration but it largely ignores requirements. That is, in the New Jersey approach, "finished" is defined by the implementors with no obvious reference to any objective test of whether they are in fact finished.

At the same time, the MIT approach falls into the trap that agile methods are designed explicitly to avoid, namely overplanning and implementation of features that may never be used.

That is, it is easy, as an engineer or analyst who has thought deeply about a particular problem domain, to think of all the things that could be needed or useful and then design a system that will provide them, and then proceed to implement it. In this model, "done" is defined by "all aspects of the previously-specified design are implemented", again with no direct reference to actual validated requirements (except to the degree the designer asserts her authority that her analysis is correct). [The HyTime standard is an example of this approach to system design. I am proud of HyTime as an exercise in design that is mathematically complete and correct with respect to its problem domain. I am not proud of it as an example of survivable design. The fact that the existence of XML and the rise of the Web largely made HyTime irrelevant does not bother me particularly because I see now that it could never have survived. It was a dinosaur: well-adapted to its original environment, large and powerful and completely ill adapted to a rapidly changing environment. I learned and moved on. I am gratified only to the degree that no new hyperlinking standard, with the possible exception of DITA 1.2+, has come anywhere close to providing the needed level of standardization of hyperlinking that HyTime provided. It's a hard problem, one where the minimum level of simplicity needed to satisfy base requirements is still dauntingly challenging.]

Thus both the MIT and New Jersey approaches ultimately fail because they are not directly requirements driven in the way that agile methods are and must be.

Or put another way, the MIT approach reflects the failure of overplanning and the New Jersey approach reflects the failure of underplanning.

Agile methods, as typified by Extreme Programming, attempt to solve the problem by doing just the right amount of planning, and no more, and that planning is primarily a function of requirements gathering and validation in the support of iteration.

To that degree, agile engineering is much closer to the worse is better approach, in that it necessarily prefers simplicity over completeness and it tends, by its start-small-and-iterate approach, to produce smaller solutions faster than a planning-heavy approach will.

Because of the way projects tend to go, where budgets get exhausted or users get bogged down in just getting the usual stuff done or technology or the business changes in the meantime, it often happens that more sophisticated or future-looking requirements never get implemented because the project simply never gets that far. This has the effect of making agile projects look, after the fact, very much like worse-is-better projects simply because informed observers can see obvious features that haven't been implemented. Without knowing the project history you can't tell if the feature holes are there because the implementors refused to implement them on the grounds of preserving simplicity or because they simply fell off the bottom of the last iteration plan.

Whether an agile project ends with a greater degree of consistency in interface is entirely a function of engineering quality but it is at least the case that agile projects need not sacrifice consistency as long as the appropriate amount of planning was done, and in particular, a solid, universally-understood data or system model was defined as part of the initial implementation activity.

At the time Unix was implemented, the practice of software and data modeling was still nascent at best, and implementation was hard enough. Today we have a deep, established practice of software modeling, well-established design patterns, and useful tools for capturing and publishing designs, so there is no excuse for not having a model for any non-trivial project.

To that degree, I would hope that the "worse is better" engineering practice typified by Unix and C is a thing of the past. We now have enough counterexamples of good design with the simplest possible implementation and very consistent interfaces (Python, Groovy, Java, XSLT, and XQuery all come to mind, although I'm sure there are many more).

But Michael's purpose in presenting worse-is-better was primarily as it relates to standards, and I think the point is still well taken--standards have value only to the degree they are adopted, that is, to the degree they survive in the Darwinian sense. Worse-is-better definitely tells us that simplicity is a powerful survival characteristic--we saw that with XML relative to SGML and with XSLT relative to DSSSL. Of course, it is not the only survival characteristic and is not sufficient, by itself, to ensure survival. But it's a very important one.

As somebody involved in the DITA standard development, I certainly take it to heart.

My thanks to Michael for helping me to think again about the value of simplicity.