Attracted by virtual constructs

June 20, 2015

Calligra’s 2nd port to Qt5 & KF5 slowly progressing

Filed under: Calligra,KDE — by frinring @ 12:47 am

There is a lot of nice stuff to do in the northern hemisphere currently, given it is summer here. And porting is less nice to do: many problems to solve, usually with no immediate gain :)
Thus things are moving slowly in the ongoing port of Calligra to Qt5/KF5 (cf. the first success at the beginning of April).

But they are moving, at least.

A few libraries have been split off from Calligra into their own repositories (more on that in another post), and one of them is about report generation. This week Calligra Plan, the project management application, could be made to work with it again, here a preview (reports are not yet added to the build in the “frameworks” branch, still waiting for the API of the now external library to become stable):
[Screenshot: Plan report design]

The busy bees behind Krita, the sketching and painting application, are all currently still buzzing around improving the Qt4/kdelibs4 version in the “calligra/2.9” branch, with new features landing on a sometimes daily basis :) To not lose any improvements there while the port is going on, the current policy is that only changes as minimal as needed should be done in the “frameworks” branch, where the port to Qt5/KF5 is happening, to allow simple merging of the “calligra/2.9” branch. The merges are done roughly on a weekly basis, and so far no complicated or unsolvable conflicts have been met, also thanks to Qt5 & KF5 with kdelibs4support offering good source compatibility (and what is perfect, anyway? ;) ).
Surely this cannot go on forever. Once all regressions in the “frameworks” branch compared to the “calligra/2.9” branch are as good as fixed (with the currently ported programs; some might be lost due to lack of maintainers), this will be declared a milestone: developer focus will switch, and feature development will then happen in the “frameworks” branch, which by then might also already have become “master”. Not yet completely decided on. And after that we should finally soon see the first released version of Calligra based on Qt5 & KF5.
(BTW, Kexi, the visual database applications creator, has gone for a different porting story, porting directly to kdelibs4support-free code, thus sadly currently only in a branch separate from the “frameworks” branch, until the milestone mentioned before is reached.)

Now and then a few things are also done for the port of Krita. And at first glance (with OpenGL usage turned off), things start to look promising:
[Screenshot: Krita]

So, still not there, still a lot to do, but slowly getting somewhere :) And as usual you are invited to join the efforts.

April 17, 2015

Like Braindump? Adopt it for the Qt5/KF5 port!

Filed under: Calligra,KDE — by frinring @ 9:23 am

As you might know, Calligra has now also started porting to Qt5/KF5. We are currently reaching the end of stage 1, where everything is re-added to the build (“links and installs? done!”), with the next stage then being to fix anything that broke or regressed (see screenshots!1!).

But we also now see that no one in the current set of active Calligra developers has enough love left for Braindump, the note-taking and mind-mapping application from the creativity and productivity suite Calligra.

So as it stands Braindump would be left behind during the porting phase and be discontinued, for now at least :(

[Braindump application icon]

Braindump is a nice example of the flexibility of the Calligra architecture, where objects are implemented by so-called “Shape” plugins, which are then available to any application supporting “Shape”s in general. The actual code of Braindump itself is centered around the concept of whiteboards with an unlimited canvas, where one can drop all possible kinds of objects (the “shapes”) and then note their relations. With automated saving in the background, there is no need for any “Save” button.

See this older video to get an idea of the possibilities:
[Video: Braindump in action]

Cyrille, who has developed Braindump, says:

“I am still interested in the application itself, but what it really needs is a better user interaction, and flake [name of the Shape system, ed.] is not flexible enough to provide it, and I don’t have the energy to make it flexible enough”.

He and the rest of the Calligra team will happily assist someone who ideally already uses Braindump and would now like to take over the future development of the Qt5/KF5-based version, to enhance their workhorse. And the porting time is a good time to get to know the current system: for the first Qt5/KF5-based Calligra release, 3.0, we are concentrating on a pure port, so no new features or refactoring (ignore the exceptions ;) ), only minimal changes. And envision the options after the port/3.0: e.g. get Braindump to run on your Android or Sailfish OS tablet! Connect it to syncing servers like ownCloud! Or whatever would enhance your Braindump usage.
And all done while enjoying the synergy effects from the shared libs and plugins of the Calligra suite.

Your chance, now :) Don’t hesitate too long, as Braindump will bitrot more and more once the 3.0 release is done and the Calligra libs see more refactoring.

Find us in the channel #calligra on irc.freenode.net, or join the mailing-list calligra-devel@kde.org.

April 10, 2015

One year old: Document Liberation Project

Filed under: Calligra,KDE,Okteta — by frinring @ 8:22 pm

On the list of projects-I-would-like-to-contribute-to-but-have-no-time-for-yet, it is one of the top ones: the Document Liberation Project. There are quite a few files from old times on my storage devices whose content is locked away in binary blobs that act like safes whose keys got lost with the software that created the files. So it’s easy to guess how I feel about such initiatives, allowing me to regain access to my very own data :)

The Document Liberation Project was only officially founded last year and can now celebrate at least its first birthday. It has not picked up much steam from new contributors so far, but it is already serving e.g. users of Calligra, with libraries like LibRevenge, LibOdfGen, LibWpd, LibWpg, LibWps, LibVisio, LibEtonyek etc., to read in data from files in WordPerfect, MS Works, MS Visio, and Keynote formats.

Once the port of Calligra to Qt5/KF5 and thus version 3.0 is done, I hope to finally pick up the work (see here and here) on being able to read my old Corel Draw v4 files with Karbon or Flow. These days that surely means using LibCDR from the Document Liberation Project instead of my own custom code. Perhaps I will then finally also be able to contribute a little to the project :)

While talking about that: another related thing still waiting for implementation is extending the hex editor Okteta to support the binary format grammar that I developed while writing my CDR import code, so that Okteta’s Structures tool would be able to read in the grammar and then show the content structure. Or a combination of that grammar and the one used by msoscheme (which is used for some of Calligra’s MS format import filters), which I learned about in the meantime.
A standardized grammar for binary formats, usable both by data inspection tools like hex editors and for code generation, surely will be good to have. There are already some related tools created/used by the Document Liberation Project, something to look at for more synergy effects.

Hm, filled TODO lists, but winter time with its long nights is over now. Too bad.

April 8, 2015

First success in Calligra’s 2nd port to Qt5 & KF5

Filed under: Calligra,KDE — by frinring @ 11:17 pm

Last month, in March, with the 2.9.0 release done, we Calligra developers followed our plans and started a branch named “frameworks” to work on version 3.0, which is to be the first version based on Qt5 and KDE Frameworks 5. Calligra 3.0 should not see any new features; the focus is purely on getting the port to the new platform done without any regressions.

In a first phase we are currently trying to re-add all libs, apps and plugins back to the build, adding TODOs for the second phase wherever simple changes are not enough.

A week ago we finally managed to get the first apps to start, e.g. Stage, Words and Sheets (somehow Oxygen icons still sneak into the UI for me):

[Screenshot: first start of Stage, Breeze style]

[Screenshot: first start, Breeze style]

[Screenshot: first start of Sheets, Breeze style]

A few things are still left to be added back to the build before the next phase can be entered, where anything broken will be fixed. Some things are even already working nicely, e.g. the Calligra ODT plugin for Okular together with the Calligra DOC import filter makes the (WIP and not yet released) Qt5/KF5-based Okular show a DOC sample file, like before with the Qt4-based versions:

[Screenshot: Qt5/KF5-based Okular showing a DOC file, Breeze style]

“2nd port”, you might wonder? It happened that (some part of) Calligra was already ported to Qt5 once, in 2013, to power the Documents app on the Jolla phone. It just also happened to be unlucky timing, as KF5 was only starting to exist, and Sailfish OS was still at Qt 5.1 at that time, while KF5 needed at least Qt 5.2. So that first port ended up as a dead branch. Still, some experiences made during that port influenced changes to the Qt4 branches, and even a few of the old commits could now be cherry-picked and applied unchanged to the “frameworks” branch :)

No idea when Calligra 3.0 will be good to be released. It’s done when it is done. We will work hard to get it done this year. Q2 would be nice. Q3 might be more realistic. Now, join the fun :)

April 1, 2015

First success in Calligra’s port to own Qt3-fork Cat

Filed under: Calligra,KDE — by frinring @ 9:18 am

((Just for the record: this post was valid only on the day it got posted.))

A few weeks ago the Calligra developers started to look into the port to Qt5 and the new KDE Frameworks 5. But it soon became obvious that this new world is just a mess, with lots of dependencies. Just look at the latest draft of the buildsystem for all the stuff that is now needed:


find_package(KF5 5.7.0 REQUIRED COMPONENTS Archive Codecs Config CoreAddons
    GuiAddons I18n ItemModels ItemViews
    WidgetsAddons ThreadWeaver
    Completion IconThemes Sonnet
    Parts
    XmlGui Kross Wallet
    Emoticons ConfigWidgets KDELibs4Support
    OPTIONAL_COMPONENTS
        Activities
        Declarative
)

find_package(Qt5 5.2.0 REQUIRED COMPONENTS Core Gui Widgets Xml PrintSupport Script Svg Test Concurrent)
find_package(Qt5 5.2.0 COMPONENTS WebKit WebKitWidgets DBus Declarative X11Extras)

This must be the influence of all the new web technologies. As the Calligra developers see the good old desktop-based computers as their main target, it was soon decided to go back to the old roots: a fork of Qt3 was created, named Cat, which will be optimized for Calligra.

And the work has gone amazingly quickly so far. Last night it was the first time possible to start the ported Stage and Words applications:

[Screenshot: first start of Stage]

[Screenshot: first start of Words]

So look out for a new fast and slim Cat-based Calligra suite being soon available to assist your productivity and creativity!

September 5, 2014

Workspace-wide services on non-file objects

Filed under: KDE — by frinring @ 10:28 pm

As a user…

Have you ever copied some text from e.g. Okular, KMail or LibreOffice to Plasma KRunner, to invoke some service on it, ideally based on auto-recognition of the data? And wished you could have already got the respective service you were going for in the context menu on the selected text?
Or have you looked at the context menu of an image in a PDF, on a website in Firefox or in a database in Kexi and wondered why it does not show at least the “Send to” services from the Kipi plugins?

As a developer…

Have you ever written a parser for plain text which detects certain things like URLs or telephone numbers and then tags those text parts, to be able to highlight them and to offer certain actions on them? Only to find out that other programs are better at detection, detect more things, and offer more or other services on them, at least in their new release, just when you had aligned yours with their old one?

If so, then we share some frustration. And an itch to scratch :)

Workspace-wide services on non-file objects

So what I would like to propose and do is a workspace-wide service system. Actually two.

The first system would make potentially all services on objects available everywhere, based on the mimetypes the program can support on export (e.g. the ones it would offer for the object to the clipboard on copy). It would also allow 3rd-parties to add new services without touching any existing programs.

The second system would make all object recognition logic available to all programs. It would also be extendable by third parties without touching existing programs.

Because, why only deal with objects in the filesystem (blobs of bytes commonly called “files” ;) ) in a generic way? Why not also with the objects in the composed object structures that programs build up at runtime in working memory and which the user can clearly address as objects in the UI?

Of course this needs to be done properly, so we do not end up with crowded and surely improvable menus (e.g. like, IMHO, the “Send to…” menu in KSnapshot). For that I am happy that in the next days at Akademy the good people from the Visual Design Group are willing to offer their input on what people come to them with… you will find me queueing up for them :)

Because…
I'm going to Akademy

Data recognition system

Often data is not completely enriched with all possible semantics; the final enrichment is done only by a human looking at the presentation of the data. E.g.:

  • items in a picture (like a cat, a flower or a QR code)
  • items in some plain text (like a phone number or the name of a person)
  • items in some partially enriched text (like an email address in a comment in source code)

Or think about items in a sound: while not typically presented in a spatial way on a screen, there is data recognition going on there as well, like a spoken word, barking, or a speaker (or a dog, if you are into dogs :) ).

Some programs have a hardcoded data recognition system, e.g. digiKam for faces of humans, Konsole for URLs in console output, KMail for URLs and email addresses. Their code is not shared with other programs; everyone would have to reimplement it. Kate and Okteta would have to write their own URL detection code, as would Rekonq, Okular and Calligra, for text not yet marked up as a URL. And Gwenview would have to do its own thing for face detection.

So I imagine a set of globally installed data recognition plugins which can be called on some given data and would report where they detected which objects. They would also mark objects with a state, like just a guess or a sure thing, and whether there is one option or multiple options for the semantic (e.g. for non-unique names of contacts matched in the address book).

For text, here is a list of things that could be detected in plain text and for which you surely can imagine some services: geocoordinates, date, time, phone number, URL, email address, IRC/chat nickname, IRC channel, name of a person, calculation, currency amount, value with physical unit, RGB value, abbreviation, identifying names of objects (like cities, countries, buildings, satellites), program name, you-name-it…

For many of these there are already recognition parsers in Plasma KRunners (even for geocoordinates with the Marble Plasma Runner). Time to share them with the whole system!
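
As a rough illustration of the idea, here is a minimal sketch of what the interface of such a recognition plugin could look like, in Qt-style C++. All names and types are invented just for this sketch; nothing like this exists yet:

// Hypothetical interface for a globally installed data recognition plugin.
// All class and member names here are made up for illustration.
#include <QByteArray>
#include <QList>
#include <QString>
#include <QStringList>

// One detected object: where it was found, what it seems to be,
// and how sure the plugin is about it.
struct RecognizedObject
{
    enum Certainty { Guess, Sure };

    int offset;             // position of the match in the passed data
    int length;             // length of the match
    QString mimetype;       // semantic type, e.g. "text/x-url" or a newly invented one
    QStringList candidates; // one or multiple options for the semantic
    Certainty certainty;    // just a guess, or a sure thing
};

// Interface each recognition plugin would implement.
class AbstractDataRecognizer
{
public:
    virtual ~AbstractDataRecognizer() {}

    // Mimetypes of input data this recognizer can work on, e.g. "text/plain".
    virtual QStringList supportedInputMimetypes() const = 0;

    // Scan the data and report all detected objects.
    virtual QList<RecognizedObject> recognize(const QByteArray &data,
                                              const QString &mimetype) const = 0;
};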

Services system

Many of the services I think of are those you can already find offered by the Plasma KRunners: doing some action based on some data provided.
But the system should be able to do more than that; I would like to have these four kinds of service types (see the sketch further below):
* action based on data (read-only with regard to the original data)
* manipulating action based on data (returning a substitute for the original data)
* action based on data combined with other data (e.g. triggered by drag’n’drop)
* manipulating action based on data combined with other data

When querying for services, the possible mimetypes of the data should be passed (like with the clipboard). For some of the things mentioned above this will mean newly invented mimetypes (e.g. for an IRC nickname or a value with physical unit), but this seems okay. Some services will want to inspect the actual data to see if they support it. Context and some metadata information (like the container) will be helpful as well (e.g. for a translation service). Some services are cheap/okay to be queried for support or run as often as wanted, some are not (e.g. public web services run by private persons). Some services can be data-risky (they might do profiling based on the seen data, or risk leaking private info). All that should be accounted for in some way.

Some semantics of the services will be needed, to assist the presentation in the UI (e.g. “send copy of data somewhere”, “show info about data”, etc.).
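
To make the proposal a bit more tangible, here is a rough sketch of how a single service could describe itself to such a system, again in invented Qt-style C++. Every name and signature below is a placeholder for the concepts from the paragraphs above, not existing API:

// Hypothetical description of a workspace-wide service on non-file objects.
#include <QByteArray>
#include <QString>
#include <QStringList>

class AbstractObjectService
{
public:
    // The four kinds of services mentioned above.
    enum Kind {
        Action,                     // read-only action on the data
        ManipulatingAction,         // returns a substitute for the original data
        CombiningAction,            // action on the data combined with other data
        ManipulatingCombiningAction // manipulating variant of the combining action
    };

    virtual ~AbstractObjectService() {}

    virtual Kind kind() const = 0;

    // Mimetypes of the data this service can be offered for.
    virtual QStringList supportedMimetypes() const = 0;

    // Semantic category to assist the presentation in the UI,
    // e.g. "send-copy-of-data-somewhere" or "show-info-about-data".
    virtual QString semanticCategory() const = 0;

    // Whether querying/running is cheap, or whether it e.g. hits a public
    // web service and thus should not be called more often than needed.
    virtual bool isCheap() const = 0;

    // Execute the service on the data; for the manipulating kinds the
    // returned data would replace the original data.
    virtual QByteArray execute(const QByteArray &data, const QString &mimetype) = 0;
};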

Programs would install context files, which could be used to configure when to offer which services (done by whitelist/blacklist of services). The UI should offer typically used services in quickly accessible/discoverable ways (like direct items in the context menu).

Perhaps there is even a fifth kind of service possible, something that feeds the tooltip or some infobox with data about the object (like a business card for a person from the address book, or a map for a location).

All this should allow services like “Offer translation”, “Alternative word proposal”, “Correction proposals”, “Look up in Wikipedia/knowledge db and show mini info card”, “Do calculation” (on data of type formula), “Convert to other unit” (on data of type value-with-unit), “Start program”, “Open file”, “Show color”, “Look for offers in internet shop”, you-get-the-idea.

This service system might be similar to something done in NeXTSTEP, at least I remember having read about that one day. And Android also possibly features something similar, from what I understood. If you have pointers to details about those, or other similar systems, please post them in the comments, so those concepts can be looked at and learned from as well. I still need to do some research on pre-existing concepts; currently I am still busy with designing this proposal itself some more.

Ideally these systems are done with cross-desktop orientation in mind. At least for the services that should be doable, as service registration and execution could be done via the abstraction layer of D-Bus, so the actual implementation does not matter. For the data recognition system I am not so sure yet, as multiple plugins all getting passed full copies of the data to do their special recognition on sounds rather heavy. No idea whether shared memory would help here without introducing other problems.

Please give your input in the comments below; I am interested in what you think of this.
I hope to also find a place for a BoF here at Akademy, for some proper feedback on the plan and hopefully implementation helpers :)

July 28, 2014

WebODF easily used, part 1: ViewerJS

Filed under: KDE,WebODF — by frinring @ 3:03 am

You have possibly heard of WebODF already, the Open Source JavaScript library for displaying and editing files in the OpenDocument format (ODF) inside HTML pages. For ideas of what is possible with WebODF and what is currently going on, see e.g. Aditya’s great blog posts about the usage of WebODF in ownCloud Documents and the highlights in the WebODF 0.5 release.

The WebODF library webodf.js comes with a rich API and lots of abstraction layers to allow adaptation to different backends and environments. There is an increasing number of software projects using WebODF, some of them listed here.

Those who are interested in the capabilities of WebODF, without needing a custom and highly integrated solution, can instead go for ready-made, simple-to-use components based on WebODF. This blog post is the first of a series to introduce you to those. It starts with the component that gives you embedded display of OpenDocument format files, i.e. text documents (ODT), presentation slides (ODP) and spreadsheets (ODS), in webpages with just a single (sic!*) line of HTML code:
* no-one would add a line-break there ;)

ViewerJS

ViewerJS is an Open Source document viewer that enables embedded display of ODF or PDF files directly in webpages, without any external server dependencies, done with just HTML, CSS and JavaScript. It uses WebODF to display files in the OpenDocument format and PDF.js for files in the PDF format.

Deploying and using ViewerJS with your webpages can be done in a few minutes. Follow this guide and see for yourself!

Quickly Added

Start by looking at the current time and noting it.

As an example file take an ODP of your choice; otherwise let’s use the slides from a talk at KDE’s Akademy in 2013, akademy2013-ODF-in-KDE-Calligra-WebODF.odp.

If you do not have a webserver handy, create a mini one locally on your system:


# Create a folder structure to serve statically
mkdir htroot

# Put the sample ODP file into htroot, renamed as "example.odp"
cp akademy2013-ODF-in-KDE-Calligra-WebODF.odp htroot/example.odp

# Add a simple html file:
touch htroot/example.html

Open example.html in an editor and have this as content:

<!DOCTYPE HTML>
<html>
  <head>
    <title>example.odp</title>
  </head>
  <body>
    <div>We got an ODP file.</div>
    <div>Would be nice to show it here.</div>
  </body>
</html>

Start a simple webserver program serving that directory, e.g. the one built into Python. For that open a separate console and do:


cd htroot
python -m SimpleHTTPServer
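# (with Python 3, the equivalent would be: python3 -m http.server)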

[Screenshot: example.odp not yet embedded]
Now browse to http://127.0.0.1:8000/example.html and make sure you see that HTML file.

The ODP file example.odp is not displayed yet, right? Not so nice.

Let’s change that and deploy ViewerJS for it.

In the first console now do:


# Download http://viewerjs.org/releases/viewerjs-0.5.2.zip
# (check if there is a newer version perhaps, then change
# all "0.5.2" below to the new version string)
wget http://viewerjs.org/releases/viewerjs-0.5.2.zip

# Unzip the file
unzip viewerjs-0.5.2.zip

# Move the folder "ViewerJS" to the toplevel dir of
# the folder structure statically served by the webserver
# (could also be a non-toplevel dir)
mv viewerjs-0.5.2/ViewerJS htroot

Now replace the “Would be nice to show it here.” line in example.html with this code:

<iframe id="viewer" src="/ViewerJS/#../example.odp" width='400' height='300' allowfullscreen webkitallowfullscreen></iframe>

(In the sources a single line, as promised. But add line-breaks as you like ;) )

[Screenshot: example.odp embedded with ViewerJS]
Now reload http://127.0.0.1:8000/example.html in your browser. And if everything worked out, you will see the ODP file now embedded in the webpage, ready to be read or e.g. presented fullscreen.

Look again at the current time. How many minutes did you need? :)

ODF or PDF

For publishing finished documents that should only be read and not further processed, PDF is the better choice IMHO, because the format specifies the exact positioning of everything.
ODF (the same holds for similar formats like OOXML) leaves the actual fine layout to the program displaying/printing the document, which can differ between computer systems and setups, usually due to the font engine used. This makes sense, as it allows creating ODF files from code that has no clue about layout calculations, e.g. some Perl script generating a report. But it can result in frustration if some document with a manually optimized layout gets laid out differently elsewhere.

Thanks to PDF.js, ViewerJS can also nicely display PDFs, so use whatever format suits your needs, be it a preview of some document to be processed further or the display of the final result.

Take a PDF file and change the above example to show that instead of the ODP file. Then try also with an ODT or ODS file.

Getting better week by week

The developers of WebODF are constantly enhancing its coverage of the ODF spec. See how the slide template for this year’s GUADEC (of course done in ODP :) ) looks almost the same in LibreOffice and in ViewerJS (v0.5.2):
[Screenshots: GUADEC slide design in LibreOffice and in ViewerJS]

Currently the Wiki hosting the GUADEC slide templates still has to say:

Current configuration does not allow embedding of the file lightning_talks.odp because of its mimetype application/vnd.oasis.opendocument.presentation

ViewerJS and WebODF hopefully can be a reason to change that soon :)

When giving talks about WebODF, of course ODPs and ViewerJS are used. Knowing the pitfalls, the slides can be made to avoid them. Still, many real-life samples not designed with the current WebODF capabilities in mind are displayed increasingly well, e.g.
[Screenshots: sample “050” in LibreOffice and in ViewerJS]
or
[Screenshots: sample “MCT” in LibreOffice and in ViewerJS]

In general, ODF documents with only formatted text and images in SVG, PNG, JPEG or similar formats are no problem for WebODF and thus ViewerJS. But as can be seen next, e.g. native ODF graphic elements are still a TODO (and the result is not related to any censoring code ;) ). Still, the display is already good enough for a “preview” :) :
[Screenshots: DLP slides in LibreOffice and in ViewerJS]

BTW, if you are browsing a website that does not yet use ViewerJS to display ODF files embedded but only provides them as links, there is another WebODF-based option for Firefox users: the ODF viewer Firefox add-on, which allows viewing ODF documents directly in Firefox on any device, without the need for a (big) office suite.

More on ViewerJS.org

Learn more about ViewerJS on the website ViewerJS.org, e.g. how to support non-embedded custom fonts. Discover the ViewerJS plugin for WordPress. Think about how you and your websites could make use of ViewerJS and how you could help to improve ViewerJS and WebODF, and then contact the ViewerJS and WebODF developers about that! They are looking forward to working together with you as well.

July 5, 2014

Calligra sprint in full progress

Filed under: Calligra,KDE — by frinring @ 12:13 pm

A weekend-long Calligra sprint is currently going on in the old and cozy center of Deventer in the Netherlands.
Yesterday everyone arrived safely, and we have been in discussions since then… Right now we are all sitting in the living room of Boudewijn and Irina (who are being great hosts to the sprint) around the coffee table, everyone with a laptop on their lap, in after-lunch digestion mode, with the full discussions to be continued any minute now.

June 18, 2014

Thumbnails & previews for your Geo Data files (KML, GPX, OSM, …)

Filed under: Calligra,KDE,Marble — by frinring @ 10:02 pm

Almost two years ago (uh, already?) I went to Prague for a developer sprint of the team of Marble, the virtual globe and world atlas. My goal was to work on a proper maps shape plugin for Calligra.

We are ready to render, right? … Right?

But it turned out that Marble at that time did not properly support the needs such a plugin has. The biggest showstopper was that Marble did not expose the information whether and when all external data needed for rendering was available. Think e.g. of map pieces (so-called tiles) still having to be downloaded from a server like OpenStreetMap’s. Not knowing that state is not good, e.g. if a document with a map is to be rendered to a real printer.

Telling the File Manager some more about Marble and Geo Data Files

At that sprint I instead worked on making sure that proper mimetypes are registered for the types of geo data files that Marble can read and display (like OSM data files, ESRI shapefiles and GPX files), so that such files are nicely displayed in the file manager with a matching icon & type description and can get Marble assigned as their handler. And:

[…]
Next step is to finish the thumbnail plugin, so one can also have nice previews of the content of these filetypes.
[…]

The thumbnailer was started on the train back from Prague, but Dresden was reached before it was properly working. Then… the code went out of focus and mind, that step frozen in mid-air…

Getting ready to render!

These days, Calligra still has no maps shape plugin. Now and then I nag^Wbeg the Marble developers about the issue (how lame, indeed), as there are other interesting use cases for rendering map stills, like:

  • Creating a jigsaw puzzle for Palapeli, the jigsaw puzzle gaming program
  • Creating an animation for a movie in Kdenlive, the video editor
  • Yes, even creating the thumbnails for geo data files, if not using the simple installed map theme

Last week earthwings luckily found some personal interest in drafting code for the render state. Time to add a bit of momentum by reviving the thumbnailer code, to serve as a use case. Still found in its old place, after some bit-rot cleanup and finally solving the problems of last time (different mindset, and things seem so obvious now), it now works well enough for serious usage:

[Screenshot: Thumbnails for Geo Data files]

Some things need some more thinking (which map theme to use when, how to properly detect the sky object the data is for, how to deal with thumbnail generation in the offline state), but it’s a good first complete step :) It does not yet use the new render state feature, but it should soon.
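
In case you wonder what such a thumbnailer roughly looks like: previews in KDE file managers are provided by small plugins implementing the ThumbCreator interface from kdelibs. The following is only a simplified sketch along those lines, not the actual plugin code; in particular the Marble calls are assumptions for illustration, and the real code has to wait until Marble has finished loading the file before grabbing the image (which is exactly where the render state work comes in):

// Simplified sketch of a geo data thumbnailer as a kdelibs ThumbCreator plugin.
// The Marble usage below is an assumption for illustration; a real plugin
// also has to wait for the (asynchronous) data loading before rendering.
#include <kio/thumbcreator.h>
#include <marble/MarbleWidget.h>
#include <marble/MarbleModel.h>
#include <QImage>
#include <QPainter>

class GeoDataThumbnailer : public ThumbCreator
{
public:
    virtual bool create(const QString &path, int width, int height, QImage &image)
    {
        Marble::MarbleWidget map;
        map.resize(width, height);
        map.setMapThemeId("earth/plain/plain.dgml"); // some simple installed map theme
        map.model()->addGeoDataFile(path);           // assumed to accept KML/GPX/OSM files

        // NOTE: here one would really have to wait until the file (and any map
        // tiles) are completely loaded, which is what the render state is about.

        image = QImage(width, height, QImage::Format_ARGB32);
        image.fill(Qt::transparent);
        QPainter painter(&image);
        map.render(&painter);                        // plain QWidget::render()
        return !image.isNull();
    }

    virtual Flags flags() const { return DrawFrame; }
};

// Entry point looked up by the thumbnail generation plugin loader.
extern "C"
{
    Q_DECL_EXPORT ThumbCreator *new_creator() { return new GeoDataThumbnailer(); }
}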

So look forward to Marble 1.9, bringing previews for geo data files to your file manager and to the file dialogs! (And do not forget to enable them in Dolphin > Settings > Configure Dolphin… > General > Previews, as they are disabled for all types by default.)

And perhaps, perhaps, one day we finally also will have a maps shape plugin for Calligra…

BTW, if you want to help the development of Marble, other KDE Edu projects and more, consider donating to enable the Randa Meetings 2014. Every little amount helps, as it adds up. Best do it now! :)

June 17, 2014

Managing internal dependencies in a build of Calligra

Filed under: Calligra,KDE — by frinring @ 1:17 am

During the move of KDE’s software projects from Subversion to Git, most projects split their subprojects over multiple Git repositories. Calligra did not, but keeps all code of all apps and extras in one single repository: all of the apps Author, Braindump, Flow, Karbon, Kexi, Krita, Plan, Sheets, Stage and Words, as well as all of the extras like the file format converter, the Okular generators, file thumbnailers and other file manager integration.
One of the reasons is that many libraries and plugins are shared among the different programs, and the API of the libraries is still changing a lot between releases. By having API-changing commits be atomic across all of the Calligra code, developers have fewer problems keeping consistent revisions of the libs and programs.

All can be too much

A downside is that people interested in only one of the Calligra programs still have to get all of the Calligra code and are possibly also faced with having to build all of Calligra. Such people could be developers working only on e.g. Kexi, users wanting to build only the bleeding edge of their favourite program, e.g. Krita, or packagers/integrators interested only in e.g. viewer components for office file formats.

To support the different building needs, over time more and more if(SOMEFLAG) [...] endif(SOMEFLAG) blocks of all kinds were added to random CMakeLists.txt files. Together with the additional conditional building due to optional external dependencies, things got complex. And thus sometimes broken.

Products, Features, and Product Sets

To get things more structured again, the concepts of “product”, “feature” and “product set” have been introduced to describe the stuff that gets built and its internal dependencies:

A “product” is the smallest functional unit which can be created in the build and which is useful on its own when installed. Examples are libraries, plugins or executables. Products have external and internal required dependencies at build time. Internal dependencies are noted in terms of other products or features (see below) and could be e.g. other libraries to link against or build tools needed to generate source files. A product gets defined by setting an identifier, a descriptive full name and the needed internal build-time requirements. Any other product or feature listed as a requirement must have been defined before.

Example:

calligra_define_product(BUILDTOOL_RNG2CPP "rng2cpp")
calligra_define_product(LIB_CALLIGRA "Calligra core libs"  REQUIRES BUILDTOOL_RNG2CPP)

A “feature” is not a standalone product, but adds abilities to one or multiple given products. One example is scriptability. Features have external and internal required dependencies at build time. Internal dependencies are noted in terms of other products or features and could be e.g. other libraries to link against or build tools needed to generate source files. A feature gets defined by setting an identifier, a descriptive full name and the needed internal build-time requirements. Any other product or feature listed as a requirement must have been defined before.

Example:

calligra_define_feature(FEATURE_SCRIPTING "Scripting feature")

A “product set” is a selection of products which should be built together. The products can be either essential or optional to the set. If essential (REQUIRED), the whole product set will not be built if a product is missing another internal or external dependency. If optional (OPTIONAL), the rest of the set will still be built in that case.
The products to include in a set can be listed directly or indirectly: they can be named themselves, or another product set can be included in a set, whose products will then be part of the first set as well.
Products and product sets can be listed as dependencies in multiple product sets. As with the dependencies of products, they must have been defined before.

Example:

calligra_define_productset(STAGE "Full Stage (for Desktop)"
    REQUIRES
        APP_STAGE
    OPTIONAL
        # extras
        FILEMANAGER
        # plugins
        PLUGIN_DEFAULTTOOLS
        PLUGIN_ARTISTICTEXTSHAPE
        PLUGIN_DOCKERS
        PLUGIN_PATHSHAPES
        PLUGIN_VARIABLES
        PLUGIN_CHARTSHAPE
        PLUGIN_PICTURESHAPE
        PLUGIN_TEXTSHAPE
        PLUGIN_PLUGINSHAPE
        PLUGIN_FORMULASHAPE
        PLUGIN_VECTORSHAPE
        PLUGIN_VIDEOSHAPE
        # filters
        FILTERS_STAGE
)

There are a number of predefined product sets, but everyone can add their own custom product set by adding a file locally in the folder cmake/productsets, named after the product set in lowercase with the extension “.cmake” and simply containing a definition as described above.
The ids of products and features (but not of sets) are used to generate CMake variables SHOULD_BUILD_${ID}, which are then used to control what is built and how (e.g. by guarding the respective add_subdirectory() calls).

Deciding what to build

The product set(s) to build are passed via the CMake flag PRODUCTSET as a whitespace-separated list of product sets, products and features, though usually it is just a single product set, e.g. the predefined “ALL”, which is also the default.

Based on the dependency tree resulting from the definitions of all products, features and product sets, the internally required products and features are determined for the requested set.
Following that, it is checked which of those have all needed external dependencies and which must be disabled from the build. Finally, the internal dependencies are checked again, and the final set of products and features that will really be built is collected.

Seeing the dependencies

With the knowledge about the internal dependencies available, one is tempted to export this data in a format that can be processed further, e.g. to visualize it. And thus, when running CMake, a file product_deps.dot in DOT notation is now generated in the top-level build directory. This one can e.g. be transformed on the command line into a PNG file by

dot -Tpng product_deps.dot > product_deps.png

The following is currently generated for me when “ALL” products and features should be built (I am missing a few external dependencies for some filters):

[Graph: Calligra product set “ALL”]

If I want to build only the KEXI and SHEETS product sets, by passing -DPRODUCTSET="KEXI SHEETS" to CMake, the graph changes to this, showing that only those products will be built which are required or optional in the dependency trees of the two product sets:

[Graph: Calligra product set “KEXI SHEETS”]

More ideas

Besides creating pretty graphs to look at to get a (better) picture, other use cases might be possible:

  • packagers could get some template file created for the packages they would create from all of Calligra
  • Libs which are dependencies of other libs or app products could automatically be added to target_link_libraries and their header dirs to include_directories
  • CI build servers could only build those products and features which would be affected by the new commits

Another use case seems to be that people would like to select a predefined product set but blacklist a few of the products/features. Support for that still has to be developed.

I wonder how much of all this might make enough sense to be moved into CMake itself. Currently, though, this whole system still needs to prove its usefulness by being adopted in more detail by all of Calligra, not only most parts. There is also the chance that things have been over-engineered :) and that Calligra should instead simply be split over multiple repositories as well. Not sure.

Be enlightened and inspired

This blog post should basically explain a little what all this product set stuff is about, both to Calligra contributors who have not yet looked into the details and to externals with perhaps similar problems to the Calligra project.

If this approach could also be a solution for you, have a look at the macros in cmake/modules/CalligraProductSetMacros.cmake; they should be reusable outside of Calligra as well. Only the new macro calligra_product_deps_report has Calligra-specific code inside, and it could be made generic as well, if there is interest.

