Trip Report: Tekom 2015, DITA vs. Walled-Garden CCMS Systems
Tekom is one of the largest, if not the largest, technical documentation conferences in
Europe. Several of us from the DITA community were invited to speak,
including Kris Eberlein, Keith Schengili-Roberts, Jang Graat, Scott
Prentice, and Sarah O'Keefe. This is the second year that Tekom has had
dedicated DITA presentations, reflecting the trend of increasing use of
and interest in DITA in Europe.
DITA vs. Not-DITA

The theme for me this year was "DITA vs. German CCMS systems".
For background, Germany has had several SGML- and XML-based component
content management system products available since the mid-1990s, of which
Schema is probably the best known. These systems use their own XML models
and are highly optimized for the needs of the German machinery industry.
They are basically walled garden CCMS systems. These are solid tools that
provide a rich set of features. But they are not necessarily generalized
XML content management systems capable of working easily with any XML
vocabulary. These products are widely deployed in Germany and elsewhere in Europe.
DITA poses a problem for these products to the degree that they are not
able to directly support DITA markup internally, for whatever reason,
e.g., having been architected around a specific XML model such that
supporting other models is difficult.
So there is a clear and understandable tension between the vendors and
happy users of these products on the one hand and the adoption of DITA on
the other. Evidence of this
tension is the creation of the DERCOM association
(http://www.dercom.de/en/dercom-home), which is, at least in part, a
banding together of the German CCMS vendors against DITA in general, as
evidenced by the document "Content Management and Structured Authoring in
Technical Communication - A progress report", which says a number of
incorrect or misleading things about DITA as a technology.
The first DITA presentation of the conference was "5 Reasons Not to Use
DITA from a CCMS Perspective" by Marcus Kesseler, one of the founders of
Schema. It was an entertaining presentation with some heated discussion, but the
presentation itself was a pretty transparent attempt to spread fear,
uncertainty, and doubt (FUD) about DITA by using false dichotomies and
category errors to make DITA look particularly bad. This was unfortunate
because Herr Kesseler had a valid point, which came out in the discussion
at the end of his talk: consultants were insisting that if his product
(Schema, and by extension similar CCMS systems) could not do DITA to a
fairly deep degree internally, then it was unacceptable, regardless of any
other useful functionality it might offer.
This is definitely a problem in that taking this sort of purist attitude
to authoring support tools is simply not appropriate or productive. While
we might argue architectural choices or implementation design options as a
technical discussion (and goodness knows I have over the years), it is not
appropriate to reject a tool simply because it is not DITA under the
covers. In particular, if a system can take DITA in and produce DITA back
out with appropriate fidelity, it doesn't really matter what it does under
the covers. Now whether tools like Schema can, today, import and export
the DITA you require is a separate question, something that one should
evaluate as part of qualifying a system as suited to task. But certainly
there's no technical barrier to these tools doing good DITA import and
export if it is in fact true, as claimed, that what they do internally is
*functionally* equivalent to DITA, which it may very well be.
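One concrete way to evaluate that import/export fidelity--a minimal sketch, not a prescription, using made-up topic content--is to canonicalize the DITA you sent in and the DITA you got back out, so that insignificant serialization differences don't register as loss:

```python
import xml.etree.ElementTree as ET

def same_dita(sent: str, returned: str) -> bool:
    # Canonical XML (C14N) normalizes attribute order, quoting style,
    # and similar serialization noise before the comparison, so only
    # real content differences count.
    return ET.canonicalize(xml_data=sent) == ET.canonicalize(xml_data=returned)

# A topic as exported to the CCMS and as it came back out,
# differing only in attribute quoting:
sent = '<topic id="t1"><title>Install</title><body><p>Step one.</p></body></topic>'
returned = "<topic id='t1'><title>Install</title><body><p>Step one.</p></body></topic>"
```

A real evaluation would of course look at the markup features you actually depend on (keys, conrefs, specializations), but the principle is the same: judge the tool by what comes back out, not by what it stores inside.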
In further discussions with Marcus and others I made the point that DITA
is first and foremost about interchange and interoperation and in that
role it has clear value to everyone as a standard and common vehicle for
interchange. To the degree that DERCOM, for example, is trying to define a
standard for interoperation and interchange among CCMS systems, DITA can
offer some value there.
I also had some discussions with writers faced with DITA--some
enthusiastic about it, some not--who were frustrated by the difficulty of
doing what they needed using the usual DITA tools as compared to the
highly-polished and mature features provided by systems like Schema. This
frustration is completely understandable--we've all experienced it. And it
is clearly a real challenge that German and, more generally, European
writing teams face as they adopt or consider adopting DITA and it's
something we need to take seriously.
One aspect of this environment is that people do not separate DITA, the
standard, from the tools that support the use of DITA, precisely because
they've had these all-singing, all-dancing CCMS systems where the XML
details are really secondary.
A DITA-based world puts the focus on the XML details, with tools being a
secondary concern. This leads to a mismatch of expectations that naturally
leads to frustration and misunderstanding. When people say things like
"The Open Toolkit doesn't do everything my non-DITA CCMS does" you know
there is an education problem.
This aspect of the European market for DITA needs attention from the DITA
community and from DITA tool vendors. I urged the writers I spoke with to
talk to the DITA CCMS vendors to help those vendors understand their
specific requirements and the features tools like Schema provide that they really value
(for example, one writer talked about support for creating sophisticated
links from graphics, an important aspect of machinery documentation but
not a DITA-specific requirement per se). I also urged Marcus to look to
us, the DITA community, for support when DITA consultants make
unreasonable demands on their products and emphasized the use of DITA for
interchange. I want us all to get along--there's no reason for there to be
a conflict between DITA and non-DITA and maintaining that dichotomy is not
going to help anyone in the long term.
Other Talks

On Wednesday there was a two-hour "Intelligent Information" panel
consisting of me, Kris Eberlein, Marcus Kesseler from Schema, and Torsten
Kuprat of Acolada, another CCMS vendor. Up until the end this was a
friendly discussion of intelligent information/intelligent content and
what it means, what it is and isn't, etc. At the end of the session we did
touch on the DITA vs. non-DITA arguments but avoided getting too
argumentative. But Kris and I both tried to push on the
standard-for-interchange aspect of intelligent content and information.
This panel was opposite a couple of other DITA presentations so I was
unable to attend those.
Keith Schengili-Roberts presented on the trends of DITA adoption around the
world, which was pretty interesting. While his data sources are crude at
best (LinkedIn profiles and job postings as well as self-reported DITA
usage) he clearly demonstrated a positive trend in DITA adoption around
the world and in Europe. I thought it was a nice counter to the
presentations of the day before.
Frank Ralf and Constant Gordon presented NXP's use of DITA and how they've
applied it to the general challenges of semiconductor documentation
management and production. It was a nice high-level discussion of what a
DITA-based system looks like and how such a system can evolve over time,
as well as some of the practical challenges faced.
My talk was on why cross-book links in legacy formats like DocBook and
FrameMaker break when you migrate those documents to DITA: "They Worked
Before, What Happened? Understanding DITA Cross-Book Links"
(Short version: you have to use the new
cross-deliverable linking features in DITA 1.3.)
George Bina presented on using custom Java URL handlers with custom URL
schemes to seamlessly convert non-XML formats into XML (DITA or otherwise)
in the context of editors like oXygenXML and processors like the DITA Open
Toolkit. He demonstrated treating things such as spreadsheets, Java class
files, and markdown documents as XML using URL references from otherwise
normal XML markup. Because the conversion is done by the URL handlers,
which are configured at the Java system level, the tools handling the XML
documents don't need to have any knowledge of the conversion tools. The
sample materials and instructions for using the custom "convert:" URL
scheme George has defined are available online.
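The idea can be sketched in Python as well (a rough analogue only--George's implementation uses custom Java URL handlers, and the "convert:" scheme details and the Markdown source here are hypothetical): urllib lets a handler claim a scheme simply by defining a <scheme>_open method, so consuming code just opens a URL and receives XML.

```python
import io
import urllib.request
import urllib.response

# Hypothetical in-memory stand-in for a real Markdown file on disk.
SOURCES = {"notes.md": "# Install\nRun the installer."}

class ConvertHandler(urllib.request.BaseHandler):
    """Resolve the (hypothetical) convert: scheme by converting the
    referenced non-XML source to XML on the fly."""

    def convert_open(self, req):
        markdown = SOURCES[req.selector]  # e.g. "convert:notes.md" -> "notes.md"
        parts = []
        # Deliberately naive Markdown-to-XML conversion for illustration:
        # "# heading" becomes <title>, other non-empty lines become <p>.
        for line in markdown.splitlines():
            if line.startswith("# "):
                parts.append(f"<title>{line[2:]}</title>")
            elif line:
                parts.append(f"<p>{line}</p>")
        xml = f"<topic>{''.join(parts)}</topic>".encode()
        return urllib.response.addinfourl(
            io.BytesIO(xml), {"Content-Type": "application/xml"}, req.full_url)

# The consuming tool needs no knowledge of the conversion:
opener = urllib.request.build_opener(ConvertHandler)
xml = opener.open("convert:notes.md").read()
```

As in George's demonstration, the conversion is configured once at the platform level (here, the opener; there, the Java system's URL handling), so editors and processors downstream see only ordinary XML.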
Wednesday's DITA events ended with a panel discussion on challenges faced
when moving to DITA, moderated by Sarah O'Keefe from Scriptorium and
featuring George Bina (Syncro Soft), Magda Caloian (Pantopix), and
Nolwenn Kezreho (IXIASOFT). It was an interesting discussion and
touched on some of the same tool and expectation mismatches discussed above.
On Thursday, Jang Graat gave a tutorial titled "The DITA Diet": using DITA
configuration and constraints to tailor your use of DITA to eliminate the
elements you don't need. He also demonstrated a FrameMaker utility he's
developed that makes it easy to tailor DITA EDDs to reflect the
configuration and constraints you want.
Also on Thursday was the German-language version of the intelligent
content panel, with Sarah O'Keefe from Scriptorium representing the consultant
role. I was not present so can't report on what was said.
Tool and Service Vendors

One interesting new tool I saw (in fact, the only new product I saw) was
the Miramo formatter Open Toolkit plugin, which is currently free for
personal use. It is a (currently) Windows-only formatter that competes
with products like Antenna House XSL Formatter and RenderX XEP. It is not
an FO implementation but offers comparable batch composition features. It
comes with a visual design tool that makes it easy to set up and modify
the composition details. This could be a nice alternative to hacking the
PDF2 transform. The server version price is comparable to the Antenna
House and XEP products. The tool is available at http://www.miramo.com. I
haven't had a chance yet to evaluate it but I plan to. I emphasized the
value of having it run on other platforms, and the Miramo representative
thought it would be possible for them to support other platforms without
too much effort.
Adobe had their usual big booth, highlighting FrameMaker 2015 with its
new DITA 1.3 features. Syncro Soft had a bigger and more prominent booth
for oXygenXML. FontoXML had their booth and I think there was another
Web-based XML/DITA editor present, but I didn't have a chance to talk to them.
Of the usual DITA CCMS vendors, IXIASOFT was the only one at the
conference (at least that I noticed). SDL had a big booth but they
appeared to be focusing on their localization and translation products,
not on their CMS system.
I think the mix of vendors reflects a settling out of the DITA technology
offerings as the DITA products mature. The same thing happened in the
early days of XML. It will be interesting to see who is also at DITA
Europe next week.
Summary

All in all, I thought Tekom was a good conference for me--I learned a lot
about the state of DITA adoption and support in Europe generally and
Germany specifically. I feel like I have a deeper understanding of the
challenges that both writers and tool vendors face as DITA gets wider
acceptance. Hopefully we can help resolve some of the DITA vs. not-DITA
tension evident at the conference. I got to talk to a lot of different
people as well as catch up with friends I only see at conferences (Kris
Eberlein and Sarah O'Keefe were joking about how, while they both live in
the same city, they only see each other at this conference).
It's clear to me that DITA is gaining traction in Europe and, slowly, in
Germany but that the DITA CCMS vendors will need to step up their game if
they want to compete head-to-head against entrenched systems like Schema
and Acolada. Likewise, the DITA community needs to do a better job of
educating both tools vendors and potential DITA users if we expect them to
be both accepting of DITA and successful in their attempts to implement
and use it.
I'm looking forward to next year. Hopefully the discussion around DITA
will be a little less contentious than this year.