Attracted by virtual constructs

September 5, 2014

Workspace-wide services on non-file objects

Filed under: KDE — by frinring @ 10:28 pm

As a user…

Have you ever copied some text from e.g. Okular, KMail or LibreOffice to Plasma KRunner, to invoke some service on it, ideally based on auto-recognition of the data? And wished that the service you were after had simply been available right in the context menu on the selected text?
Or have you looked at the context menu of an image in a PDF, a website in Firefox or a database in Kexi and wondered why it does not at least show the “Send to” services from the Kipi plugins?

As a developer…

Have you ever written a parser for plain text which detects certain things like urls or telephone numbers, then tags those text parts, to be able to highlight them and to offer certain actions on them? Only to find out that other programs detect things better, cover more kinds of data, and offer more or different services on them, at least in their latest release, just when you had aligned yours with their old one?

If so, then we share some frustration. And an itch to scratch :)

Workspace-wide services on non-file objects

So what I would like to propose and do is a workspace-wide service system. Actually two.

The first system would make potentially all services on objects available everywhere, based on the mimetypes the program can export the object as (e.g. the ones it would offer to the clipboard on copy). It would also allow 3rd parties to add new services without touching any existing programs.

The second system would make all object recognition logic available to all programs. It would be extendable by 3rd parties as well, without touching existing programs.

Because, why only deal with objects in the filesystem (blobs of bytes commonly called “files” ;) ) in a generic way? Why not also with the objects in the composed structures that programs build up at runtime in working memory and which the user can clearly address as objects in the UI?

Of course this needs to be done properly, so we do not end up with crowded and surely improvable menus (e.g. like, IMHO, the “Send to…” menu in KSnapshot). For that I am happy that in the next days at Akademy the good people from the Visual Design Group are willing to offer their input on what people come to them with… you will find me queueing up for them :)

Because…
I'm going to Akademy

Data recognition system

Often data is not completely enriched with all possible semantics; the final enrichment is done only by a human looking at the presentation of the data. E.g.

  • items in a picture (like a cat, a flower or a QR code)
  • items in some plain text (like a phone number or the name of a person)
  • items in some partially enriched text (like an email address in a comment in source code)

Or think about items in a sound: while not typically presented in a spatial way on a screen, there is data recognition going on there as well, like a spoken word, barking, or the identity of a speaker (or of a dog, if you are into dogs :) ).

Some programs have a hardcoded data recognition system, e.g. Digikam for faces of humans, Konsole for urls in console output, KMail for urls and email addresses. Their code is not shared with other programs; everyone has to reimplement it. Kate and Okteta would have to write their own url detection code, as would Rekonq, Okular and Calligra for text not yet marked up as a url. And Gwenview would have to do its own thing for face detection.

So I imagine a set of globally installed data recognition plugins which can be called on some given data and would report where they detected which objects. They would also mark detected objects with a state, like just a guess or a sure thing, and with whether there is one option or multiple options for the semantics (e.g. for non-unique names of contacts matched in the addressbook).

For text, here is a list of things that could be detected in plain text and for which you surely can imagine some services: geocoordinates, date, time, phone number, url, email address, irc/chat nickname, irc channel, name of a person, calculation, currency amount, value with physical unit, RGB value, abbreviation, identifying names of objects (like cities, countries, buildings, satellites), program name, you-name-it…

For many of these there are already recognition parsers in Plasma KRunners (even for geocoordinates with the Marble Plasma Runner). Time to share them with the whole system!

Services system

Many of the services I think of are those you can already find offered by the Plasma KRunners: doing some action based on some data provided.
Now the system should be able to do more than that; I would like to have these four kinds of service types:

  • action based on data (read-only with regard to the original data)
  • manipulating action based on data (returning a substitute for the original data)
  • action based on data combined with other data (e.g. triggered by drag’n’drop)
  • manipulating action based on data combined with other data

When querying for services, the possible mimetypes of the data should be passed (like with the clipboard). For some of the things mentioned above this will mean newly invented mimetypes (e.g. for an irc nickname or a value with physical unit), but this seems okay. Some services will want to inspect the actual data to see if they support something. Context and some metadata (like the container) will be helpful as well (e.g. for a translation service). Some services are cheap/okay to be queried for support or run as often as wanted, some are not (e.g. public web services run by private parties). Some services can be data-risky (they could profile you by the seen data or risk leaking private info). All that should be accounted for in some way.

Some semantics for the services will be needed, to assist the presentation in the UI (e.g. “send a copy of the data somewhere”, “show info about the data”, etc.).

Programs would install context files, which could be used to configure when to offer which services (done by whitelist/blacklist of services). The UI should offer typically used services in quickly accessible/discoverable ways (like direct items in the context menu).

Perhaps there is even a fifth kind of service possible: something that feeds the tooltip or some infobox with data about the object (like a business card for a person from the addressbook or a map for a location).

All this should allow services like “Offer translation”, “Alternative word proposal”, “Correction proposals”, “Look up in Wikipedia/knowledge db and show mini info card”, “Do calculation” (on data of type formula), “Convert to other unit” (on data of type value with unit), “Start program”, “Open file”, “Show color”, “Look for offers in internet shop”, you-get-the-idea.

This service system might be similar to something done in NeXTSTEP, at least I remember having read about that one day. And Android possibly features something similar as well, from what I understood. If you have pointers to details about those, or other similar systems, please post them in the comments, so those concepts can be looked at and learned from as well. I still need to do the research on pre-existing concepts, being currently still busy with designing this proposal itself some more.

Ideally these systems are done with cross-desktop orientation in mind. At least for the services system that should be doable, as service registration and service execution could be done via the abstraction layer of D-Bus, so the actual implementation does not matter. For the data recognition system I am not so sure yet, as multiple plugins all getting full copies of the data passed in to do their special recognition sounds rather heavy. No idea yet whether shared memory could help here without introducing other problems.
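To make the querying idea a bit more concrete, here is a purely hypothetical sketch of a D-Bus interface such a service registry could expose; none of these names, methods or signatures exist anywhere yet, they only illustrate the query-by-mimetype and invocation steps described above:

<!-- purely hypothetical interface sketch, nothing here is implemented or decided -->
<node>
  <interface name="org.example.ObjectServiceRegistry">
    <!-- query: pass the mimetypes the program can export the object as,
         get back the ids of services claiming to support any of them -->
    <method name="QueryServices">
      <arg name="mimetypes" type="as" direction="in"/>
      <arg name="serviceIds" type="as" direction="out"/>
    </method>
    <!-- execution: hand the actual data to one selected service -->
    <method name="InvokeService">
      <arg name="serviceId" type="s" direction="in"/>
      <arg name="mimetype" type="s" direction="in"/>
      <arg name="data" type="ay" direction="in"/>
    </method>
  </interface>
</node>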

Please give your input in the comments below; I am interested in what you think of this.
I also hope to find a place for a BoF here at Akademy, for some proper feedback on the plan and hopefully some implementation helpers :)

July 28, 2014

WebODF easily used, part 1: ViewerJS

Filed under: KDE,WebODF — by frinring @ 3:03 am

You possibly have heard of WebODF already, the Open Source JavaScript library for displaying and editing files in the OpenDocument format (ODF) inside HTML pages. For ideas of what is possible with WebODF and what is currently going on, see e.g. Aditya’s great blog posts about the usage of WebODF in OwnCloud Documents and Highlights in the WebODF 0.5 release.

The WebODF library webodf.js comes with a rich API and lots of abstraction layers to allow adaptation to different backends and environments. There is an increasing number of software projects using WebODF, some of them listed here.

Those who are interested in the capabilities of WebODF, but do not need a custom and highly integrated solution, can additionally go for ready-made, simple-to-use components based on WebODF. This blog post is the first of a series to introduce you to those. It starts with the component that gives you embedded display of OpenDocument format files, that is text documents (ODT), presentation slides (ODP) and spreadsheets (ODS), in webpages with just a single (sic!*) line of HTML code:
* no-one would add a line-break there ;)

ViewerJS

ViewerJS is an Open Source document viewer that enables embedded display of ODF or PDF files directly in webpages, without any external server dependencies, done with just HTML, CSS and JavaScript. It uses WebODF to display files in the OpenDocument format and PDF.js for files in the PDF format.

Deploying and using ViewerJS with your webpages can be done in a few minutes. Follow this guide and see for yourself!

Quickly Added

Start by looking at the current time and noting it.

As an example file take an ODP of your choice; otherwise let’s use the slides from a talk at KDE’s Akademy in 2013, akademy2013-ODF-in-KDE-Calligra-WebODF.odp.

If you do not have a webserver handy, create a mini one locally on your system:


# Create a folder structure to serve statically
mkdir htroot

# Put the sample ODP file into htroot, renamed as "example.odp"
cp akademy2013-ODF-in-KDE-Calligra-WebODF.odp htroot/example.odp

# Add a simple html file:
touch htroot/example.html

Open example.html in an editor and have this as content:

<!DOCTYPE HTML>
<html>
  <head>
    <title>example.odp</title>
  </head>
  <body>
    <div>We got an ODP file.</div>
    <div>Would be nice to show it here.</div>
  </body>
</html>

Start a simple webserver program serving that directory, e.g. the one built into Python. For that open a separate console and do:


cd htroot
python -m SimpleHTTPServer
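In case your system only ships Python 3 (where the SimpleHTTPServer module no longer exists), the equivalent built-in server is started with:

cd htroot
python3 -m http.server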

example.odp not embedded

Now browse to http://127.0.0.1:8000/example.html and make sure you see that HTML file.

The ODP file example.odp is not displayed yet, right. Not so nice.

Let’s change that and deploy ViewerJS for it.

In the first console now do:


# Download http://viewerjs.org/releases/viewerjs-0.5.2.zip
# (check if there is a newer version perhaps, then change
# all "0.5.2" below to the new version string)
wget http://viewerjs.org/releases/viewerjs-0.5.2.zip

# Unzip the file
unzip viewerjs-0.5.2.zip

# Move the folder "ViewerJS" to the toplevel dir of
# the folder structure statically served by the webserver
# (could also be a non-toplevel dir)
mv viewerjs-0.5.2/ViewerJS htroot

Now replace the “Would be nice to show it here.” line in example.html with this code:

<iframe id="viewer" src="/ViewerJS/#../example.odp" width='400' height='300' allowfullscreen webkitallowfullscreen></iframe>

(in the sources one line, as promised. But add line-breaks as you like ;) )

example.odp embedded with ViewerJS

Now reload http://127.0.0.1:8000/example.html in your browser. And if everything worked out, you see the ODP file now embedded in the webpage, ready to be read or e.g. presented fullscreen.

Look again at the current time. How many minutes did you need? :)

ODF or PDF

For publishing finished documents that should only be read and not further processed, PDF is IMHO the better choice, because the format specifies the exact positioning of everything.
ODF (the same goes for similar formats like OOXML) leaves the actual fine layout to the program displaying/printing the document, which can differ between computer systems and setups, usually due to the font engine used. This makes sense, as it allows creating ODF files from code that has no clue about layout calculations, e.g. some Perl script generating a report. But it can result in frustration if a document with a manually optimized layout gets laid out differently elsewhere.

Thanks to PDF.js, ViewerJS can also nicely display PDFs, so use whatever format suits your needs, be it a preview of a document to be processed further or the display of the final result.

Take a PDF file and change the above example to show that instead of the ODP file. Then try also with an ODT or ODS file.
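For example, assuming you have put a PDF file into htroot as example.pdf (the name is just an assumption), only the src of the iframe from above needs to change; the ViewerJS path stays the same:

<iframe id="viewer" src="/ViewerJS/#../example.pdf" width='400' height='300' allowfullscreen webkitallowfullscreen></iframe>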

Getting better week by week

The developers of WebODF are constantly enhancing its coverage of the ODF spec. See how the slide template for this year’s GUADEC (of course done in ODP :) ) looks almost the same in LibreOffice and ViewerJS (v0.5.2):
GUADEC2015SlideDesign in LibreOffice
GUADEC2014SlideDesign in ViewerJS

Currently the Wiki hosting the GUADEC slide templates still has to say:

Current configuration does not allow embedding of the file lightning_talks.odp because of its mimetype application/vnd.oasis.opendocument.presentation

ViewerJS and WebODF hopefully can be a reason to change that soon :)

When giving talks about WebODF, ODPs and ViewerJS are of course used. Knowing the pitfalls, the slides can be made to avoid them. Still, many real-life samples not designed for current WebODF capabilities are displayed increasingly well, e.g.
050 in LibreOffice
050 in ViewerJS
or
MCT in LibreOffice
MCT in ViewerJS

In general, ODF documents with only formatted text and images in SVG, PNG, JPEG or similar are no problem for WebODF and thus ViewerJS. But as can be seen next, native ODF graphic elements, for example, are still a TODO (and the result is not related to any censoring code ;) ). Still, the display is already good enough for a “preview” :) :
DLP in LibreOffice
DLP in ViewerJS

BTW, if you are browsing a website that does not yet use ViewerJS to display ODF files embedded but only provides them as links, there is another WebODF-based option for Firefox users: the ODF viewer Firefox Add-on, which allows viewing ODF documents directly in Firefox on any device, without the need for a (big) office suite.

More on ViewerJS.org

Learn more about ViewerJS on the website ViewerJS.org, e.g. how to support non-embedded custom fonts. Discover the ViewerJS plugin for WordPress. Think about how you and your websites could make use of ViewerJS and how you could help to improve ViewerJS and WebODF, and then contact the ViewerJS and WebODF developers about that! They are looking forward to working together with you as well.

July 5, 2014

Calligra sprint in full swing

Filed under: Calligra,KDE — by frinring @ 12:13 pm

A weekend-long Calligra sprint is currently going on in the old and cozy center of Deventer in the Netherlands.
Yesterday everyone arrived safely, and we have been in discussions since then… Right now we are all sitting in the living room of Boudewijn and Irina (who are being great hosts to the sprint) around the coffee table, everyone with a laptop on their lap, in after-lunch digesting mode, with the full discussions to be continued any minute now.

June 18, 2014

Thumbnails & previews for your Geo Data files (KML, GPX, OSM, …)

Filed under: Calligra,KDE,Marble — by frinring @ 10:02 pm

Almost two years ago (uh, already?) I went to Prague for a developer sprint of the team of Marble, the virtual globe and world atlas. My goal was to work on a proper maps shape plugin for Calligra.

We are ready to render, right? … Right?

Only it turned out that Marble at that time did not properly support the needs such a plugin has. The biggest showstopper was that Marble did not expose whether and when all external data needed for rendering was available. Think e.g. of map pieces (so-called tiles) still having to be downloaded from a server like OpenStreetMap’s. Not knowing that state is not good, e.g. if a document with a map is to be rendered to a real printer.

Telling the File Manager some more about Marble and Geo Data Files

At that sprint I instead worked on making sure that proper mimetypes are registered for the types of geo data files that Marble can read and display (like OSM data files, ESRI shapefiles and GPX files), so that such files are nicely displayed in the file manager with a matching icon & type description and can get Marble assigned as their handler. And:

[...]
Next step is to finish the thumbnail plugin, so one can also have nice previews of the content of these filetypes.
[...]

The thumbnailer was started on the train back from Prague, but Dresden was reached before it was working properly. Then… the code went out of focus and mind, that step frozen in mid-air…

Getting ready to render!

These days, Calligra still has no maps shape plugin. Now and then I nag^Wbeg the Marble developers about the issue (how lame, indeed), as there are other interesting use cases for rendering map stills, like:

  • Creating a jigsaw puzzle for Palapeli, the jigsaw puzzle gaming program
  • Creating an animation for a movie in Kdenlive, the video editor
  • Yes, even creating the thumbnails for geo data files, if not using the simple installed map theme

Last week earthwings luckily found some personal interest in drafting code for the render state. Time to add a bit of momentum by reviving the thumbnailer code, to serve as a use case. Still found in its old place, after some bit-rot cleanup and finally solving the problems of last time (different mindset, and things seem so obvious now), it now works well enough for serious usage:

Thumbnails for Geo Data files

Some things need some more thinking (which map theme to use when, how to properly detect the sky object the data is for, how to deal with thumbnail generation when offline), but it’s a good first complete step :) It does not yet use the new render state feature, but it should soon.

So look forward to Marble 1.9, bringing previews for geo data files to your file manager and to the file dialogs! (And do not forget to enable them in Dolphin > Settings > Configure Dolphin… > General > Previews; they are disabled for all types by default.)

And perhaps, perhaps, one day we finally also will have a maps shape plugin for Calligra…

BTW, if you want to help the development of Marble, other KDE Edu projects and more, consider donating to enable the Randa Meetings 2014. Every little amount helps, as it adds up. Best do it now! :)

June 17, 2014

Managing internal dependencies in a build of Calligra

Filed under: Calligra,KDE — by frinring @ 1:17 am

During the move of KDE’s software projects from Subversion to Git most projects split their subprojects over multiple Git repositories. Calligra did not, but is keeping all code of all apps and extras in one single repository. That is all of the apps Author, Braindump, Flow, Karbon, Kexi, Krita, Plan, Sheets, Stage and Words as well as all of the extras like the file format converter, the Okular generators, file thumbnailers and other file manager integration.
One of the reasons is that many libraries and plugins are shared among the different programs, and the API of the libraries still changes a lot between releases. By having API-changing commits be atomic across all of the Calligra code, developers have fewer problems keeping consistent revisions of the libs and programs.

All can be too much

A downside is: people interested in only one of the Calligra programs still have to get all of the Calligra code and also are faced with possibly having to build all of Calligra. Such people could be developers working only on e.g. Kexi, users wanting to only build the bleeding edge of their favourite program, e.g. Krita, or packagers/integrators only interested in e.g. viewer components for office file formats.

To support the different building needs, over time more and more if(SOMEFLAG) [...] endif(SOMEFLAG) blocks of all kinds were added to random CMakeLists.txt files. Together with the conditional building due to optional external dependencies, things got complex. And thus sometimes broken.

Products, Features, and Product Sets

To get things more structured again, the concepts of “product”, “feature” and “product set” have been introduced to describe the stuff that gets built and its internal dependencies:

A “product” is the smallest functional unit which can be created in the build and which is useful on its own when installed. Examples are libraries, plugins or executables. Products have external and internal required dependencies at build time. Internal dependencies are noted in terms of other products or features (see below) and could be e.g. other libraries to link against or build tools needed to generate source files. A product gets defined by setting an identifier, a descriptive full name and the needed internal build-time requirements. Any other product or feature listed as a requirement must have been defined before.

Example:

calligra_define_product(BUILDTOOL_RNG2CPP "rng2cpp")
calligra_define_product(LIB_CALLIGRA "Calligra core libs"  REQUIRES BUILDTOOL_RNG2CPP)

A “feature” is not a standalone product, but adds abilities to one or multiple given products. One example is scriptability. Features have external and internal required dependencies at build time. Internal dependencies are noted in terms of other products or features and could be e.g. other libraries to link against or build tools needed to generate source files. A feature gets defined by setting an identifier, a descriptive full name and the needed internal build-time requirements. Any other product or feature listed as a requirement must have been defined before.

Example:

calligra_define_feature(FEATURE_SCRIPTING "Scripting feature")

A “product set” is a selection of products which should be built together. The products can be either essential or optional to the set. If essential (REQUIRED), the whole product set will not be built if a product is missing another internal or external dependency. If optional (OPTIONAL), the rest of the set will still be built in that case.
The products to include in a set can be listed directly or indirectly: they can be named themselves, or another product set can be included in a set, whose products will then be part of the first set as well.
Products and product sets can be listed as dependencies in multiple product sets. As with the dependencies of products, they must have been defined before.

Example:

calligra_define_productset(STAGE "Full Stage (for Desktop)"
    REQUIRES
        APP_STAGE
    OPTIONAL
        # extras
        FILEMANAGER
        # plugins
        PLUGIN_DEFAULTTOOLS
        PLUGIN_ARTISTICTEXTSHAPE
        PLUGIN_DOCKERS
        PLUGIN_PATHSHAPES
        PLUGIN_VARIABLES
        PLUGIN_CHARTSHAPE
        PLUGIN_PICTURESHAPE
        PLUGIN_TEXTSHAPE
        PLUGIN_PLUGINSHAPE
        PLUGIN_FORMULASHAPE
        PLUGIN_VECTORSHAPE
        PLUGIN_VIDEOSHAPE
        # filters
        FILTERS_STAGE
)

There are a number of predefined product sets, but everyone can add their own custom product set by adding a file locally in the folder cmake/productsets, named after the product set in lowercase, with the extension “.cmake”, and simply containing a definition as described above.
The ids of products and features (but not of sets) are used to generate CMake variables SHOULD_BUILD_${ID}, which are then used to control what is built and how.
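As a made-up example, a local file cmake/productsets/myviewers.cmake defining a custom set named MYVIEWERS (the name and selection are purely illustrative, reusing identifiers from the example above) could look like this:

calligra_define_productset(MYVIEWERS "My viewer selection"
    REQUIRES
        APP_STAGE
    OPTIONAL
        FILEMANAGER
)

And in a CMakeLists.txt the generated variable is then used in the usual conditional way (the subdirectory name here is only an assumption):

if(SHOULD_BUILD_APP_STAGE)
    add_subdirectory(stage)
endif()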

Deciding what to build

The product set(s) to build are passed via the CMake flag PRODUCTSET, as a whitespace-separated list of product sets, products and features, though usually it is just a single product set, e.g. the predefined “ALL”, which is also the default.
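On the command line this could look like the following (the source directory path is of course just a placeholder):

# configure a build of only the Kexi and Sheets product sets
cmake -DPRODUCTSET="KEXI SHEETS" ../calligra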

Based on the dependency tree resulting from the definitions of all products, features and product sets, the internally required products and features are determined for the requested set.
Following that, it is checked which of those have all needed external dependencies and which must be disabled from the build. Finally the internal dependencies are checked again, and the final set of products and features that will really be built is collected.

Seeing the dependencies

With the knowledge about the internal dependencies available, one is tempted to export this data in a format that can be processed further, e.g. to visualize it. And so, when running CMake, a file product_deps.dot in DOT notation is now generated in the top-level build directory. It can e.g. be transformed on the command line into a PNG file, by

dot -Tpng product_deps.dot > product_deps.png

The following is currently generated for me when “ALL” products and features should be built (I am missing a few external dependencies for some filters):

Calligra product set “ALL”

If I wanted to only build the KEXI and SHEETS product sets, by passing -DPRODUCTSET="KEXI SHEETS" to CMake, the graph changes to this, showing that only those products will be built which are required or optional in the dependency trees of the two product sets:

Calligra product set “KEXI SHEETS”

More ideas

Besides creating pretty graphs to look at to get a (better) picture, other use cases might be possible:

  • packagers could get some template file created for the packages they would create from all of Calligra
  • Libs which are dependencies of other libs or app products could be automatically added to the target_link_libraries and their header dirs to the include_directories
  • CI build servers could only build those products and features which would be affected by the new commits

Then there seems to be the use case that people would like to select a predefined product set, but blacklist a few of its products/features. Support for that has yet to be developed and done.

I wonder how much of all this might make enough sense to be moved into CMake itself. Currently though, this whole system still needs to prove its usefulness by being adopted in more detail by all of Calligra, not only most parts. There is also a chance of having over-engineered things :) and that Calligra should instead simply be split over multiple repositories as well. Not sure.

Be enlightened and inspired

This blog post should basically explain a little what all this product set stuff is about, both to Calligra contributors who have not yet looked into the details and to externals with perhaps similar problems as the Calligra project.

If this approach could also be a solution for you, have a look at the macros in cmake/modules/CalligraProductSetMacros.cmake. They should be reusable outside of Calligra as well; only the new macro calligra_product_deps_report has Calligra-specific code inside, and it could be made generic too, if there is interest.


June 5, 2014

Calligra-powered Okular plugin for ODT, DOC & DOCX

Filed under: Calligra,KDE — by frinring @ 9:49 pm

You might know that Okular has a plugin system for adding support for more document formats. And you might know that for years Calligra has also provided a plugin for Okular, which adds support for viewing slides from files in the OpenDocument Presentation (ODP) format. And not only the ODP format: by simply using the Calligra import filters for PPT and PPTX you can also view the slides locked away in those formats.

The different apps of Calligra used to be built on the KParts system, so any files in formats supported by them would also be viewable in KPart-embedding programs like Konqueror or KDevelop. But due to the currently ongoing creation of a new MVC-oriented foundation for the Calligra programs this has changed: the Calligra modules are no longer KParts.

Now, I happen to read ODT files directly in KDevelop now and then. And Okular has some native support for ODT. But it is not as powerful as what I am used to from Calligra, so I wanted that back. The best thing of course would be to write a Calligra plugin directly for KDevelop (like done for the Okteta integration). But I wanted something quicker, and with less work. Writing KPart wrappers around the Calligra modules would have been the next option. But then I remembered that Sven, who did the Calligra ODP generator plugin for Okular, had also once started an ODT generator, but left it in a branch. And Okular also has a UI optimized for document reading. So a commit cherry-pick and some bit-rot fixes later, I had ODT files nicely displayed in KDevelop again, thanks to the chain Calligra Words engine -> Okular KPart -> KDevelop :)

See here Okular displaying the ODT 1.2 spec, of course in ODT format:
Calligra-based ODT generator for Okular

And like the ODP generator plugin adds support for PPT and PPTX by simply using the existing filters, the very same is possible with the ODT generator plugin and the import filters for documents locked away in DOC and DOCX formats. A link on a webpage to a file in DOCX format? Click it and view the file directly in the Okular KPart, powered by Calligra’s ODT generator plugin and DOCX import filter:
Calligra-based DOCX generator for Okular

Currently the generators are just simple rendering ones. Of course we want the generators to become proper extended ones, including TextPage support, so you get all the comfort you have when reading PDF files in Okular. Come and join the coding fun: navigate your editor to the generators’ code in extras/ and your browser to the excellent Okular generator Howto.

You would instead like to extend the support to other formats that Calligra has import filters for (Stage, Words)? Then take a look at the commit which added support for DOC and DOCX: it is just a matter of adding desktop files, per format one for the application, one for the KPart and one for the generator itself :)

Waiting for your review request on the Review board (group “calligra”)!

Your preferred format is not yet there? Consider adding it, e.g. by joining the Document Liberation Project and also adding an import filter to Calligra.

So look forward to Calligra 2.9 later this year, bringing a better ODT viewer, and viewers for DOC & DOCX, to an Okular near your fingertips :) And perhaps more, at your will!

January 12, 2014

Okteta Qt5/KF5-port: now free of kde4support

Filed under: Kasten,KDE,Okteta — by frinring @ 5:24 pm

Racing close behind Kate/KTextEditor on the way to the Qt5/KF5 fields, the port of Okteta to Qt5/KF5 has now crossed the lane and moved out of “KDE4” terrain as well, by no longer depending on kde4support.
Of course also with zero compiler warnings during the build (with build.kde.org settings) and all existing tests succeeding (to follow the environment-friendly standards set by Kate’s developers ;) ), as can now also be nicely seen on Okteta’s framework builds on build.kde.org (thanks to the ever-awesome Ben for setting things up).

As you might remember, Alexander started the port last summer and has ever since been in the feedback cycle of KF5 and Okteta as well as other KF5 porting work (note that the Qt5/KF5 port of Okteta has meanwhile moved to a branch named frameworks, for consistency with the other repos).

Shifting priorities, I finally managed to join his efforts (jumping on the moving train, eh), just in time to “steal” from him the milestone commit “Remove KDE4Support, no longer needed”. Pardon, Alex ;) , at least you get credited here.

So what is up next? Of course giving lots of feedback to KDE Frameworks 5, as there is quite some work left to do. From simple things like incomplete CamelCase forwarding headers to platform integration issues, e.g. the file dialog ignoring the QFileDialog::FileMode that was set and also blocking user input after cancelling, or QFontDatabase::systemFont(QFontDatabase::FixedFont) not delivering a fixed-width font.
And then thinking about adding the Okteta widget libraries to KDE Frameworks 5. The KHexEdit Interfaces were not integrated into KF5, because the original reason for having them instead of proper libs in kdelibs was to not bloat kdelibs for just a few possible users of a hex edit widget. That reason is gone with KF5. It is just that I am not yet sure about the API, and KF5 would require a stable one. So in case you are interested in Qt5/KF5 hex edit widgets, ping me and tell me your requirements (find my email address in Okteta’s file headers or the About dialog).
More: my playground project Kasten finally needs to get pushed forward, and possibly soon into a repo of its own. More core/UI splitting, QML variants etc. are now on the list, and the core/widget splitting in KF5 helps here a lot. Then proper async behaviour and more. A long TODO list there.
And from Qt5/KF5 terrain, besides the obvious usual target OSes, Sailfish OS and even Ubuntu Touch can be seen on the horizon, so program variants for those might be interesting. And whatever other platform can be reached from there. (Yes, you just do not yet know that you might need a hex editor there :P )
So much for the current dreams ;)

The roadmap draft foresees version 0.14 as the first Qt5/KF5-based release of Okteta (released “when it is ready”).
That means there will be another, final Qt4/kdelibs4-based version of Okteta before that, 0.13, to be released as part of KDE Apps 4.13 this summer, possibly with 1-2 small features added.

Lots to do. But by using KDevelop everything seems easily possible :)

July 30, 2013

Okteta ported to Qt5/KF5

Filed under: Kasten,KDE,Okteta — by frinring @ 6:38 pm

It’s now almost seven years ago, during the creation of KDE4 with all its massive porting work, that on November 27th 2006 this email was sent to Laurent Montel, one of the main people pushing that port:

Hi Laurent,

please don't spend too much effort at the old program KHexEdit, I am quite far
on the way to write a successor, called Okteta. Concerning feature
compatibility, so far I implemented around 60 % of the features of KHexEdit,
and hope to do the last 40 % until at least January. Yes, no code yet in SVN
(besides the library), but that will change in three weeks, promised.
[...]

As history has shown, things luckily worked out as planned that time, and Okteta now exists as a hex editor based on the KDE4 platform. Even if it still misses 1 or 2 features from KHexEdit (but hopefully makes up for that with some new ones). ;)

Now it is porting time again, onto the new and promising platform of Qt5/KF5. I myself have so far only started looking into things, with this year’s Akademy being the first time I at least did a checkout of the Qt5 and KF5 sources.

Two days after the return from Akademy it was now me who received an email, from Alexander Richardson, among other things the author of Okteta’s cool Structures tool, telling me he had had a few spare days and had looked into how much work it would be to port Okteta to Qt5/KF5. By simply doing it: see the branch kf5-port!

I had a bit of time during the last few days and had a look at how much
effort it would be to port Okteta to Qt5/KF5.
[...]

So here is happily presented the very first public screenshot of Okteta on Qt5/KF5, brought to you by Alex, showing the raw internals of the executable Eclipse:
Okteta on Qt5/KF5, inspecting Eclipse executable

Alexander found that the porting was rather easy thanks to the KDE4Support module:

Apart from that I have to say, porting is quite pleasant, doable within a day
thanks to KDE4Support. Most of the commits after that were porting away
from KDE4Support.

So: first porting to KDE4Support (he managed to do that in a day), and then resolving things to Qt5 or KF5 replacements step by step.

So if the days are again too hot outside, lock yourself up in the fridge and give a port of your program a try. And consider helping out the “busy bees” who have already done so much great work on converting kdelibs into the KDE Frameworks or on Qt5, and do your small share of honey creation! (Yes, I am also telling that to myself.)

And if you need a sample program to play with Qt5/KF5, thanks to Alex’s work you can now use Okteta from the kf5-port branch (though you will need to apply a patch for a bug in Qt5 it exposes, which hopefully will get fixed soon; can anyone push that?).

July 27, 2013

Happy to have been at Akademy 2013

Filed under: Calligra,Kasten,KDE,Okteta — by frinring @ 9:56 pm

Planning the travel was complicated & two weeks before the event things looked too expensive. But I had my talk in the program and I also just wanted to go. So I decided to try harder and then found an acceptable solution, which included renting a flat from a private owner together with my good work-mate Jos, right across from the stadium on the other side of the river.

And I am so happy I went! It was a very good time in Bilbao, and I have to join the choir of those cheering for the organizers, helpers and sponsors:
Thank you/Eskerrik asko!

Bilbao & its surroundings are really a nice spot; additionally the weather was perfect and the local food was so tasty. Then lots of interesting topics in the program and meeting people once again in the real world. And everything well organized and cared for. Even the travel went without any missed connections or other issues. So, a completely happy Akademy time.

I had to deal with a challenge I did not expect: to jump into the water from a harbour wall on the daytrip, from above my height-fear trigger level. Nice idea for a first-ever contact with the Atlantic Ocean, I will remember that. Seeing how much fun especially Milian was having on his uncounted jumps helped me to conquer my fear twice for the final step/jump over the edge into the void, to be accelerated endlessly before splashing into the surprisingly salty water :)
It surely was interesting for the locals to see that flock of international people invade the harbour walls and then act like murres, throwing themselves into the water again and again from that height.

Another jump into new waters I did with my talk about the Kasten framework I am developing under the hood of Okteta. So far I had only hinted at the framework in some blog posts here, because it has still been in rather initial states and changing all the time, and there are quite some shortcuts in the code to make Okteta work as expected without exposing the rough internals. Initially, when proposing the talk in spring, I had planned to get the QML parts of Kasten ready in time for the talk, so the value of Kasten might already be even more visible. But well, some life threads with higher priority required my resources more often :) So QML in Kasten is postponed for now. Still I am happy to have shown off some of the ideas behind Kasten and a little of the current state of the implementation, because it meant getting more serious with it for me, and some first feedback was also collected. With all the talk about Qt5/KF5 I also decided after the talk to skip further development of Kasten on Qt4/KDELibs4, unless needed for some new features of Okteta, and go straight to Qt5/KF5 as the platform, soon also in a repository of its own outside of Okteta’s.

And because one talk is not enough, I happened to help out in a replacement talk that Jos did just before mine, doing some promotion for the OpenDocument Format. We of course did not let the opportunity slip by to demo the related latest cool thing we have been doing at our company KO GmbH: realtime collaborative editing of ODT documents in the browser with WebODF. It was a really good experience to see lots of people from the audience log in to the editing session and show their creativity (e.g. in placing ads*), for the first time outside of my working chair :)
* One really needs spam filters everywhere, sigh… ;)

Also cool was to see live for the first time how an artist uses Krita, in Timothée’s talk about cool new features for comics and animation work in Krita. It looked so unbelievably natural in use: no problems, everything just worked during his talk. Really nice. Heh, and Krita also has some small code snippets from me somewhere :)

Another pretty interesting thing was the BoF about QmlWeb, which sadly was only on Friday, so not a lot of people were present. Doing webpages in QML terms definitely looked attractive; Anton did a nice ad-hoc demo in the BoF. These days QmlWeb has also become a KDE Project, for now in playground. Welcome!

Kevin Krammer’s talk about Declarative Widgets was another talk that caught my attention for the complete time and afterwards; using QML and still having QWidgets is definitely attractive. In a discussion the next day Kevin also recommended that I use QML for my Kasten framework, given that the QML engine happily works with any objects that are QObjects. And a declarative definition of Kasten-based programs/modules is what I would like to support one day as well.

I learned the hard way that one has to/should register for any (sub-)events, to make sure any goodies can also be received (and this one would have been really useful to me in the coming month for Kasten experiments, too bad). Then again, do I really, really need it? Perhaps less is more… At least when it is about people who store my email account details ;)

On the Saturday after Akademy, on a trip to the beach “Barinatxe” recommended by our flat-owner (BTW, it was also awesome of the Akademy organizers to have got free metro tickets for everyone, valid for the complete network), I learned that beaches can be too hot for more reasons than you(?) would think: the sand was simply too hot to walk on barefoot for more than a few meters, possibly due to its rather dark color. And one needed a thicker towel to lie there in comfort. Funny to see all the people always running from their spot into the water.

I found a small shell on the daytrip; here on my desktop it will remind me a little longer of this really fine Akademy. Well done, everyone!

March 8, 2013

Calligra Spring 2013 Sprint started

Filed under: Calligra,KDE — by frinring @ 6:53 pm

The Calligra contributor community is finally meeting again for a sprint weekend, both virtually and in real life: there are 6 people at the ThoughtWorks Bangalore office in India, sitting and hacking on stuff since the morning. And 11 people are gathering at the Linuxhotel in Europe over the course of the evening, to follow and join them for the next two days. Other people are popping up in the random Google Hangout sessions, and of course in the IRC channel #calligra.

Today was arrival day, so it was more or less dynamically structured. Still, the Krita people already had their BoF, as most of them arrived early. Tomorrow will then be the big discussion day; topics will be e.g. a new document/view architecture and improving QML support.


With a few more 2.x releases still to come, Calligra is slowly approaching version 3.0, a milestone where the individual programs are not only useful as serious viewers, with e.g. excellent import filters for MS formats, but finally also as reliable, easy-to-use and well-integrated editors (which most still need to become).

Krita, as the current flagship, is already making waves in the world of movie and GFX studios, with Intel even having used a special version (Krita Sketch) at their CES booth!
Author is going to find a so-far-unclaimed niche, while Kexi is getting closer to occupying its targeted one. Words, Stage and Sheets are offering an alternative UX to what AOO | LO | MS have. Plan quietly evolves into a serious project planner. And more.
While these are all exciting developments, there are also new challenges ahead: KF5 & Qt5 & QML2 & Plasma Active.
Some old challenges are also still around: while by now only Kexi still has Qt3Support dependencies, the big refactoring of the central Calligra libraries that was started waits to be finished.

There are lots of reasons to keep on pushing the Calligra programs and libraries: built on Qt/kdelibs and with a quite modular architecture, they are quite easy to adapt to new platforms out there, which can e.g. be seen with Calligra Active or the plugins for Okular, which have been done with comparatively little effort. And Qt5 brings even more hope and options.

This sprint would not be possible without the supporters of KDE e.V.; thanks to them for making it financially possible for us to meet up and develop plans for the future roadmaps. So if you, dear reader, want to make your own little contribution to the future of KDE software as well, consider Join the Game and become a supporter of the KDE e.V.!

Thanks also to KO GmbH for supporting the sprint, to ThoughtWorks Bangalore for hosting the Indian part of the sprint and to the Linuxhotel for the community-friendly offer of their great setting for the European part. And thanks to Claudia, the KDE e.V.’s business manager, for her quick and uncomplicated handling of any issues.

Next up here at the Linuxhotel: pasta self-cooking for dinner (that’s why there are pasta sauce recipes on the sprint planning page ;) ). Oh, it got ready while writing this post; so actually next is pasta self-eating :)
