Include also moc files of headers

And what about…

While talking about the build time improvements seen by avoiding the use of Qt module header includes, Volker Krause wondered in chat:

regarding the compile time improvements, I have the suspicion that included moc files would help with incremental build times, possibly even quite noticeably (compiling the combined automoc file can be quite expensive), but no idea how that impacts clean builds

And while he was occupied with other things, this suspicion caught my interest and curiosity, so I found some slots to give it a closer look and also learn some more.

After all, people including myself had removed quite some explicit moc includes over the years, also in KDE projects, enjoying the existing automoc magic for less manual code. Just that in the meantime, as I soon noticed, Qt developers had stepped up efforts to add the ones missing in the Qt libraries, surely for reasons.

Back to the basics…

Let’s take a simple example of two independent QObject sub-classes Foo and Bar, with own header and source files:

foo.h: class Foo : public QObject { Q_OBJECT /* ... */ };

bar.h: class Bar : public QObject { Q_OBJECT /* ... */ };

foo.cpp: #include "foo.h" /* non-inline Foo method definitions */

bar.cpp: #include "bar.h" /* non-inline Bar method definitions */

CMake’s automoc will detect the respective Q_OBJECT macro usages and generate build system rules to have the moc tool create the respective files moc_foo.cpp and moc_bar.cpp, which contain the code complementing the macro (e.g. for the class meta object).

If no source files include those generated moc files, CMake will then have added rules to generate for each library or executable target a central file mocs_compilation.cpp which includes them:

// This file is autogenerated. Changes will be overwritten.
#include "<SOURCE_DIR_CHECKSUM>/moc_foo.cpp"
#include "<SOURCE_DIR_CHECKSUM>/moc_bar.cpp"

This results in a single compilation unit with all the moc code. It is faster to build compared to compiling all moc files as separate compilation units. Note the “all” here, as all moc code only needs to be built together in full project (re)builds.

Incremental builds want to rebuild only a minimal set of sources

When working on a codebase, one usually does incremental builds, so rebuilding only those artifacts that depend on changed sources. That gives quick edit-build-test cycles, helping to keep concentration stable (when no office-chair sword duel tournaments are ongoing anyway).

So for the example above when the header foo.h is edited, in an incremental build…

  1. the file foo.cpp is recompiled as it includes this header…
  2. next moc_foo.cpp is regenerated from the header and then…
  3. mocs_compilation.cpp is recompiled, given it includes moc_foo.cpp.

Just, as mocs_compilation.cpp does not only include moc_foo.cpp, but also moc_bar.cpp, this means the code from moc_bar.cpp is also recompiled here, even though it does not depend on foo.h.

So the optimization of having a single compilation unit for all moc files of headers, done for full builds, results in unneeded extra work for incremental builds. Which gets worse with any additional header that needs a moc file, which then also is included in mocs_compilation.cpp. And that is the problem Volker talked about.

Impact of mocs_compilation.cpp builds

On the author’s system (i5-2520M CPU @ 2.5 GHz, with SSD) some measurements were done by calling touch on a mocs_compilation.cpp file (touch foo_autogen/mocs_compilation.cpp), then asking the build system to update the respective object file and measuring that with the tool time (time make foo_autogen/mocs_compilation.cpp.o).

To have some reference, first the single moc file of a most simple QObject subclass was looked at, where times averaged around 1.6 s. Then random mocs_compilation.cpp files found in the local build dirs of random projects were checked, with times measured in the range of 5 s to 14 s.

Multiple seconds spent on mocs_compilation.cpp, again and again, can make a difference in the experience with incremental builds, where the other updates might take even less time.

Impact of moc include on single source file builds

To measure the cost that including a moc file adds to (re)compiling a single source file, again the tool time was used, with the compiler command to generate an object file as taken from the build system.

A few rounds of measurement only delivered average differences that were one or two magnitudes smaller than the variance seen in the times taken, so the cost is considered unnoticeable. A guess is that the compiler can reuse for the added moc generated code all the work already done for the other code in the including source file, and that the moc generated code itself is relatively uncomplicated.

This is in comparison to the noticeable time it needs to build mocs_compilation.cpp, as described above.

Impact of moc includes on full builds, by examples

An answer to “no idea how that impacts clean builds” might be hard to derive in theory. The effort it takes to build the moc generated code separately in mocs_compilation.cpp versus the sum of the additional efforts it takes to build each moc generated code as part of source files depends on the circumstances of the sources involved. The measurements done before for mocs_compilation.cpp and single source file builds though hint at an overall build time reduction in real-world situations.

For some real-world numbers, a set of patches for a few KDE repos was done (easy with the scripts available, see below). Then the scenario of someone doing a fresh build of such a repo using the meta-build tool kdesrc-build was run a few times on an otherwise idle developer system (same i5-2520M CPU @ 2.5 GHz, with SSD), both for the current codebase and then with all possible moc includes added.

Using the make tool, configured to use 4 parallel jobs, with the build dir always completely removed before, and kdesrc-build invoked with the --build-only option (so skipping repo updates), the timing was measured using the time tool as before. It reports by “real” the wall clock time, while “user” reports the sum of times all threads spent in non-kernel processor usage. The time spent in related kernel processing (“sys”) was ignored due to being very small in comparison.

The numbers showed that in all cases clean builds got faster with moc includes, with build times partially reduced by more than 10 %:

                            Before           “moc includes”   Reduction   Average of   Variance
LibKDEGames (MR)      real  1 min 02,18 s    0 min 58,46 s     6 %        5 runs       2 s
                      user  3 min 06,37 s    2 min 48,13 s    10 %                     3 s
KXmlGui (MR)          real  2 min 26,62 s    2 min 09,09 s    12 %        3 runs
                      user  7 min 34,42 s    6 min 35,07 s    13 %
Kirigami (MR)         real  1 min 32,83 s    1 min 29,79 s     3 %        3 runs
                      user  4 min 25,67 s    4 min 19,94 s     2 %
NetworkmanagerQt (MR) real 11 min 48,10 s   11 min 18,57 s     4 %        1 run
                      user 40 min 39,78 s   39 min 05,28 s     4 %
KCalendarCore (MR)    real  3 min 09,91 s    2 min 42,83 s    14 %        3 runs
                      user 10 min 17,57 s    8 min 54,90 s    13 %

Further, less controlled own time measurements for other codebases support this impression, as do reports from others (“total build time dropped by around 10%.”, Qt Interest mailing list in 2019). With that, for now it would be assumed that the times needed for clean builds are not a reason against moc includes, rather the opposite.

And there are more reasons, read on.

Reducing need for headers to include other headers

moc generated code needs to have the full declaration of types used as values in signal or slot method arguments. Same for types used as values or references for Q_PROPERTY class properties, in Qt6 also for types used as pointers:

class Bar; // forward declaration, not enough for moc generated code here

class Foo : public QObject {
    Q_OBJECT
    Q_PROPERTY(Bar* bar READ barPointer) // Qt6: full Bar declaration needed
    Q_PROPERTY(Bar& bar READ barRef)     // full Bar declaration needed
    Q_PROPERTY(Bar bar  READ barValue)   // full Bar declaration needed
Q_SIGNALS:
    void fooed(Bar bar); // full Bar declaration needed
public Q_SLOTS:
    void foo(Bar bar);   // full Bar declaration needed
    // [...]
};

So if the moc file for class Foo is compiled separately and thus only sees the given declarations as above, it will fail to build.

This can be solved by replacing the forward declaration of class Bar with the full declaration, e.g. by including a header where Bar is declared, which itself again might need more declarations. But the price is that everything else which needs the full class Foo declaration now also gets those other declarations, even where not useful.

Solving it instead by including the moc file in a source file with definitions of class Foo methods, where the full class Bar declaration is usually already available as needed for those methods, allows keeping the forward declaration:

#include "foo.h"
#include "bar.h" // needed for class Foo methods' definitions
// [definitions of class Foo methods]
#include "moc_foo.cpp" // moc generated code sourced

Which keeps both full and incremental project builds faster.

While making KDE projects Qt6-ready, a set of commits with messages like “Use includes instead of forward decl where needed” was made, due to the new requirements of moc generated code for properties with pointer types. These would not have been needed with moc includes.

Enabling clang to warn about unused private fields

The clang compiler is capable of checking and warning about unused private class members if it can see all class methods in the same compilation unit (GCC so far needs to catch up):

class Foo : public QObject {
    Q_OBJECT
    /* ... */
private:
    bool m_unusedFlag;
};

The above declaration will trigger a warning if the moc file is included in the source file holding the definitions of all (normal) non-inline methods:

/.../foo.h:17:10: warning: private field 'm_unusedFlag' is not used [-Wunused-private-field]
   bool m_unusedFlag;
        ^

But not if the moc file is compiled separately, as the compiler has to assume the other methods might use the member.

Better binary code, due to more in the compilation unit

A moc include in a source file provides the compiler with more material in the same compilation unit, which is said to be usable for some optimizations.

Indeed, when building libraries in Release mode, so with some optimization flags enabled, it can be observed that for some the size shrank by a few thousandths. So at least size was optimized. For others though it grew a tiny bit, e.g. in the .text section with the code. It is assumed this is caused by code duplication due to inlining. So there runtime is optimized at the cost of size, and one would have to trust the compiler for a sane trade-off, as done with all the other, normal code.

For another example, one of the commits to Qt’s own modules establishing moc includes for them reports in the commit message for the QtWidgets module:

A very simple way to save ~3KiB in .text size and 440b in data size on GCC 5.3 Linux AMD64 release builds.

So far it sounds like it is all advantages, so what about the disadvantages?

More manual code to maintain with explicit moc include statements

Having explicit include statements for each moc file covering a header (e.g. moc_foo.cpp for foo.h) means more code to manually maintain. Which is less comfortable.

Though the same is already the case for moc files covering source files (e.g. foo.moc for foo.cpp): those have to be included anyway, given the class declarations they need are in that very source file. So doing the same also for the other type would not feel that strange.
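For illustration, a minimal sketch of that already established pattern, with a hypothetical private helper class declared directly in the source file:

#include "foo.h"

// private helper class, declared only in this very source file
class FooHelper : public QObject
{
    Q_OBJECT
    /* ... */
};

/* ... definitions of class Foo methods, using FooHelper ... */

#include "foo.moc" // moc file for this source file, has to be included manually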

The other manual effort needed is to ensure that every needed moc include is actually present. At least with CMake’s automoc things will just silently work: any moc file not explicitly included is automatically included by the target’s mocs_compilation.cpp file. That one is currently always generated, built and linked to the target (TODO: file wish to CMake for a flag to have no mocs_compilation.cpp file).

One approach to enforce moc includes might be to add respective scripts as commit hooks, see e.g. check-includemocs-hook.sh from KDAB’s KDToolBox.

No longer needed moc includes are also not critical with CMake’s automoc: an empty file will be generated and a warning added to the build log. So the developer can clean up later when there is time.

So the cost is one include statement per moc-covered header and its occasional maintenance.

Automated moc file include statements addition, variant 6

There already exist a few scripts to scan sources and amend missing include statements for moc files.

Initially I was not aware of all of them. The ones tested (KDE’s, KDAB’s & Remy van Elst’s) missed to match header files with the basename suffixed by “_p” (e.g. foo_p.h) to source files without that suffix (e.g. foo.cpp). So there is now a draft (working for what it was used for) of yet another script, addmocincludes. Oh dear 🙂

Suspicion substantiated: better use moc includes

As shown above, it looks like the use of explicit includes also for header moc files improves things for multiple stakeholders:

  • developers: gain from faster full & incremental builds, more sanity checks
  • users: gain from runtime improvements
  • CI: gains from faster full builds
  • packagers: gain from faster full builds

All paid by the cost of one explicit include statement for each moc-covered header and its occasional maintenance. And in some cases a slightly bigger binary size.

Seems a good deal, no? So…

  1. pick one of the scripts above and have it add more explicit moc includes
  2. check for some now possible forward declarations
  3. look out for any newly discovered unused private members
  4. PROFIT!!! (enjoy the things gained long term by this one-time investment)
Update (Aug 14th):

To have the build system work along these ideas, two issues have now been filed with CMake’s issue tracker.

Review use of Qt module header includes

Wait a minute…

Having come across sources using include statements for some Qt module headers (like #include <QtDBus>), memories arose of a check by the static analyzer tool krazy, as once conveniently run on KDE’s former ebn.kde.org site. That check, called “includes”, poked one not to use Qt module headers, due to them resulting in the inclusion of all the headers of those modules, and then again those of the other Qt modules used by the module. Which then meant more stuff to process by the compiler for compilation units with such module header includes.

So is that perhaps no longer a real-world noticeable issue in 2023? A first look at some preprocessor outputs (with Qt5) for a single-line file with just an include statement hinted though that it might still be:

foo.cpp: #include <QtDBus>      →  foo.cpp.i: 137477 lines
foo.cpp: #include <QDBusReply>  →  foo.cpp.i:  86615 lines

So 50862 more lines of code, mainly declarations and inline methods, where a good part might not be needed at all by other code in a file including the header, yet is processed each time. And if such includes are placed in headers, that happens for a lot of compilation units. Given most normal source files are shorter, it seemed this difference might still be noticeable, given the order of magnitude in the extreme example above.

Wait some minutes less now

The KDE Frameworks module NetworkManagerQt was found to use quite a lot of QtDBus module header includes, while otherwise mostly following the include-only-what-you-need-and-forward-declare-otherwise mantra. Possibly those includes were a result of tools generating code and using the module headers to speed up the initial development experience.

A patch was done to replace those QtDBus module header includes with includes of just the headers needed for the classes & namespaces used. It turned out that the number of additional include statements needed afterwards was rather small, so no bigger costs there.
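As a hypothetical sketch of such a replacement (the actual headers needed depend on the code in question):

// before: the module header, pulling in all QtDBus (and QtCore) headers
#include <QtDBus>

// after: just what the code actually uses
#include <QDBusConnection>
#include <QDBusPendingCallWatcher>
class QDBusObjectPath; // forward declaration suffices where only referenced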

For a simple test of the real-world effects, an otherwise idle developer system was used, with hot cache for the source files from previous runs, with SSD and old i5-2520M 2.5 GHz CPU. For both variants the build dir would be cleaned by make clean and then a single-job make run started, timed with the time tool, by time make. The results were these (repeated runs hinted those numbers are representative):

                     #include <QtDBus>   #include <[header as needed]>
real (wall clock)    18m51,032s          14m6,925s
user                 17m58,326s          13m22,964s
sys                   1m54,234s           1m26,826s

So an overall build time reduction by around a quarter for a clean(ed) build.

Incremental builds during development should also gain, but that was not measured, just assumed. 🙂

Wait* on the code, to not wait on the build

(*as in waiter)

So in the spirit of the old Krazy includes check, consider taking a look at your codebase to see whether some Qt module header includes (QtCore, QtDBus, QtQml, QtGui, QtWidgets, QtNetwork, …) have sneaked in which might be simple to replace with “normal” includes.

Note: there is at least one tricky include with QtConcurrent, as that module shares the name with its main C++ namespace. So one might have used #include <QtConcurrent>, because of familiar Qt patterns and because the API documentation also says to do so. Just, that include gets one the module header, which then also pulls in #include <QtCore> with all its headers. Looking at the include directory of that module, one can find dedicated headers to use instead, like QtConcurrentRun. While many codebases e.g. in KDE projects rely on those, they still need to be officially documented (QTBUG-114663).
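A small sketch of using that dedicated header instead (function names hypothetical):

#include <QtConcurrentRun> // dedicated header, instead of the module header <QtConcurrent>
#include <QFuture>

int heavyComputation(); // some expensive function, defined elsewhere

void startWork()
{
    QFuture<int> future = QtConcurrent::run(heavyComputation);
    // ...
}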

In case one would like some more modern automation tool to check for the use of Qt module header includes, take a look at the current work to add a check to the static code analyzer Clazy.

PS: For those into riding their office chair into sword duels… you should have let me win more often? 😉

No Yes/No, yes?

How some evening supermarket shopping is triggering some API work…

Human Mind vs. Machine Mind

Some time ago I ran into a variant of a self-service checkout system in a supermarket which, when asking about applying the data collection identity card, used a dialog with the button options “Yes” & “No”. Being privacy-positive, my thoughts were: yes, I want to keep my data private, and I was about to press the “Yes” button. Only to check once more and find that the question actually was “Do you want to use our card?”. Which made me wonder why in the year 2022 new systems are developed that apply that old pattern of “Yes” & “No” replies. And it reminded me that also in newer software made in the KDE community I had seen new appearances of that scheme. Had it not been found to be inferior, from what I had seen in passing in the HMI field?

What Do The Human Interface Guidelines Say

Let’s see what in 2022 the guidelines for some prominent UI systems recommend for buttons in dialogs.

Apple’s Human Interface Guidelines for Alerts (dialogs) about text to use on buttons:

Aim for a one- or two-word title that describes the result of selecting the button. Prefer verbs and verb phrases that relate directly to the alert text — for example, “View All,” “Reply,” or “Ignore.” In informational alerts only, you can use “OK” for acceptance, avoiding “Yes” and “No.” Always use “Cancel” to title a button that cancels the alert’s action.

Google’s Material guidelines on behavior of Alert dialogs:

Don’t use action text that fails to indicate what the selection will do. “Cancel” and “Delete” better indicate what will occur in this dialog.
[ed.: comment on a “Don’t” example using “NO”/”YES” buttons]

Microsoft’s Fluent Design System guidelines on buttons of Dialog controls:

Use specific responses to the main instruction or content as button text. An example is, “Do you want to allow AppName to access your location?”, followed by “Allow” and “Block” buttons. Specific responses can be understood more quickly, resulting in efficient decision making.

And respective recommendations can be also found in guidelines of FLOSS projects:

Haiku’s Human Interface Guidelines hold for Alert Windows this:

Avoid Yes / No button labels. It is much better to use the name of the action in the label, such as Save Changes / Discard Changes. Only in very rare cases are Yes / No labels the best choice.

Also KDE’s Human Interface Guidelines state on Modal Message Dialog:

Buttons should clearly indicate the available options using action verbs (“Delete”, “Rename”, “Close”, “Accept”, etc.) and allow the user to make an informed decision even if they have not read the message text. Never use “Yes” and “No” as button titles.

And the GNOME Human Interface Guidelines recommend on Action dialogs:

Label the affirmative button with a specific imperative verb, for example: Save or Print. This is clearer than a generic label like OK or Done.

Looking at older guidelines, e.g. in the NeXTSTEP User Interface Guidelines from November 1993, in the section “Naming Buttons in an Attention Panel” one can read:

When naming buttons in an attention panel, you should label each one clearly with a verb or verb phrase describing the action it performs. The user shouldn’t have to read the text of the attention panel to be able to choose the right button. Thus, generic labels (like Yes and No) aren’t appropriate, as they tend to cause user errors.

And similar, to little surprise, the variant in the OpenStep User Interface Guidelines from September 1996 in its section “Naming the Buttons in an Attention Panel”:

Label each button clearly with a verb or verb phrase that describes its action. Users should be able to read the names of the buttons and choose the right one. They should not need to read other text on the panel. Avoid “generic” labels like Yes and No, because they are not clear and lead to user errors. Avoid using OK unless it is the only button in the attention panel.

So it seems the authors of all the HIGs checked agree on avoiding Yes & No. But is that actually founded on data from science & research, or did they just copy from each other?

Backed by Research?

On a quick look I could not find related scientific research reports that could back up the guideline recommendations. But instead I came across research in the related field of designing questionnaires, on the topic of preventing errors in the given answers, e.g. due to misunderstandings or lack of concentration. And that seemed to confirm that people gave more correct answers, and also found it simpler to do so, when the items representing the choice (e.g. a text next to a checkbox) themselves had a clear unique reference to the choice, instead of being abstract items whose meaning could only be known from their assignment to a choice in the question itself. Abstract items being things like colors, shapes, positions, numbers, or the very Yes & No.

Not seen discussed or even researched, but my theory would be that things are even worse when there is a memory effect and something could mean the opposite in other, similar choices.

Own experience with soda or coffee machines would confirm that: fewer mistakes remembered when pushing a button with the image of the wanted drink on it than when entering a number on a dial to express the selection. Even more so when the motivation for a drink was temporary brain insufficiency 😉

(If a reader has some pointer to related public papers, happy to add here).

API-Driven UI

In personal experience a lot of software is produced patch-by-patch, feature-by-feature, idea-by-idea. Often by people who at most learned how to write syntax-conforming code. And to be efficient, typically things are developed using the resources which are available, e.g. deploying existing standard libraries. Thus instead of UX engineers designing HIG-conforming UI stories that direct the implementation, as theory would suggest, it is the API of existing UI component libraries, and any documentation around it, that does.

Legacy API Rooting for Yes & No

One might remember more “Yes” & “No” buttons in older software. One reason might be that those texts (and their localized variants) need less horizontal space on the screen, something to consider for sure in low-dpi times. But also still in hi-dpi times, when there are other constraints requiring very short texts to fit the display.

Another reason was that it saves resources in one’s own software if one just has to pass a flag denoting a set of predefined standard buttons with their texts and possibly icons, instead of having to maintain and carry around all that data in the own executable and then pass it over at runtime. And such a flag approach is supported by the API of legacy libraries.

The classic Microsoft Windows Win32 API provides a MessageBox function, to show a modal dialog:

int MessageBox(
  [in, optional] HWND    hWnd,
  [in, optional] LPCTSTR lpText,
  [in, optional] LPCTSTR lpCaption,
  [in]           UINT    uType
);

The argument uType is used with flags to define the buttons to use in the dialog, like MB_YESNO or MB_YESNOCANCEL. Those buttons have hard-coded texts. The function return value indicates which button was pressed, like IDYES or IDNO.
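A sketch of the typical consumer code this API leads to (strings and function names hypothetical):

#include <windows.h>

void askToSaveChanges(HWND window)
{
    // the uType flag MB_YESNO gets the two hard-coded "Yes"/"No" buttons
    const int answer = MessageBox(window,
        TEXT("Do you want to save the changes?"),
        TEXT("Save Changes"), MB_YESNO);
    if (answer == IDYES) {
        // saveChanges();
    }
}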

The MessageBox signature and its related definitions have also been carried over to other MS APIs, see e.g. .NET’s System.Windows.Forms.MessageBox with MessageBoxButtons and DialogResult. So developers continue to write “Yes” & “No” options dialogs, despite a HIG from the same company (see above) recommending not to do that.

Borland’s (now Embarcadero) Visual Component Library (VCL) has been following the same spirit offering the function Vcl.Dialogs.MessageDlg:

function MessageDlg(
    const Msg: string;
    DlgType:   TMsgDlgType;
    Buttons:   TMsgDlgButtons;
    HelpCtx:   Longint
): Integer;

The Buttons argument takes flag values like mbYes and mbNo, while the matching return values are defined by constants like mrYes or mrNo. The latest version, 10.4 Sydney, at least has an overload with an additional argument CustomButtonCaptions: array of string. But the flags and constants keep the code centered around the concepts of Yes and No.

Another classic UI library is Java’s Swing. It provides a class javax.swing.JOptionPane to use for standard dialog boxes. While the developer can add any custom UI components to the button row, the class itself provides prominently documented convenience API to use predefined button sets by optionType argument values like YES_NO_OPTION or YES_NO_CANCEL_OPTION. Using those flags saves resources needed to come up with own texts and any needed localization, so a developer has motivation to use those.

Then there is Qt. It has had the class QMessageBox (now part of Qt Widgets) providing a modal dialog. That has an enum with values like QMessageBox::Yes and QMessageBox::No, which is used in the class methods to reference predefined buttons as well as to return the user choice. Static convenience methods like QMessageBox::question() use Yes & No as default arguments, with no option to use custom buttons.

QMessageBox::StandardButton QMessageBox::question(
    QWidget *parent,
    const QString &title,
    const QString &text,
    QMessageBox::StandardButtons buttons = StandardButtons(Yes | No),
    QMessageBox::StandardButton defaultButton = NoButton
);

Custom buttons can be used when manually creating a QMessageBox instance, but the involved flags and constants also here keep the code centered around the concepts of Yes and No.
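A sketch of the consumer code pattern this convenience API suggests (strings and function names hypothetical):

#include <QMessageBox>

void confirmRemoval(QWidget *parent)
{
    // default buttons are Yes & No, and the result is reported as Yes or No
    const auto answer = QMessageBox::question(parent,
        QStringLiteral("Remove Entry"),
        QStringLiteral("Do you want to remove the entry?"));
    if (answer == QMessageBox::Yes) {
        // removeEntry();
    }
}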

Again the API was carried over to newer products of its own, here the QtQuick API. The QML type QtQuick.Dialogs.MessageDialog has a property standardButtons, which takes a flag set for predefined buttons, like QtQuick.Dialogs.StandardButton.Yes and QtQuick.Dialogs.StandardButton.No.

The sibling type from QtQuick Controls 2, QtQuick.Controls.Dialog, also has a property standardButtons, which takes a flag set for predefined buttons, here QtQuick.Controls.Dialog.Yes or QtQuick.Controls.Dialog.No. With this type one can customize the button properties in the Component.onCompleted handler, but as with QMessageBox the involved flags and constants keep the code centered around the concepts of Yes and No.

Looking further, Gtk also has had methods around a Gtk.MessageDialog, with the main instance creation function being:

GtkWidget*
gtk_message_dialog_new (
  GtkWindow* parent,
  GtkDialogFlags flags,
  GtkMessageType type,
  GtkButtonsType buttons,
  const char* message_format,
  ...
);

The argument buttons is used to reference predefined sets of buttons, e.g. GTK_BUTTONS_YES_NO. The dialog will emit a signal when the user has chosen a button, using values like GTK_RESPONSE_YES or GTK_RESPONSE_NO. Alternatively one can add only custom buttons, with own texts and ids. A note in the documentation for Gtk.ButtonsType at least hints that GTK_BUTTONS_YES_NO is discouraged by GNOME’s HIG.

Oh Yes: KDE Frameworks with Oh-Nos

The KDE Frameworks, a set of extensions around Qt, have quite some APIs designed decades ago. Among them are, in the KMessageBox namespace, convenience methods around message dialogs, as more feature-rich variants of the static methods of QMessageBox, reflecting the accepted state of the art at the time. Over time they have grown into a large set, all encoding their specifics in the method names. But they never got adjusted to meet the newer state of the art when it comes to recommended texts on the buttons, including KDE’s own HIG.

Examples are:

ButtonCode
questionYesNo(
    QWidget *parent,
    const QString &text,
    const QString &title = QString(),
    const KGuiItem &buttonYes = KStandardGuiItem::yes(),
    const KGuiItem &buttonNo = KStandardGuiItem::no(),
    const QString &dontAskAgainName = QString(),
    Options options = Notify
);
ButtonCode
questionYesNoCancel(
    QWidget *parent,
    const QString &text,
    const QString &title = QString(),
    const KGuiItem &buttonYes = KStandardGuiItem::yes(),
    const KGuiItem &buttonNo = KStandardGuiItem::no(),
    const KGuiItem &buttonCancel = KStandardGuiItem::cancel(),
    const QString &dontAskAgainName = QString(),
    Options options = Notify
);
ButtonCode
warningYesNo(
    QWidget *parent,
    const QString &text,
    const QString &title = QString(),
    const KGuiItem &buttonYes = KStandardGuiItem::yes(),
    const KGuiItem &buttonNo = KStandardGuiItem::no(),
    const QString &dontAskAgainName = QString(),
    Options options = Options(Notify|Dangerous)
);
ButtonCode
warningYesNoCancel(
    QWidget *parent,
    const QString &text,
    const QString &title = QString(),
    const KGuiItem &buttonYes = KStandardGuiItem::yes(),
    const KGuiItem &buttonNo = KStandardGuiItem::no(),
    const KGuiItem &buttonCancel = KStandardGuiItem::cancel(),
    const QString &dontAskAgainName = QString(),
    Options options = Notify
);

The return type ButtonCode is an enum with values like Yes and No.

Recent API additions for some more asynchronous variants, though without convenient one-method-call code, by the class KMessageDialog continue the pattern. An instance can only be created by defining its type in the constructor:

KMessageDialog::KMessageDialog(
    KMessageDialog::Type type,
    const QString &text,
    QWidget *parent = nullptr 
);

Where the argument of enum type KMessageDialog::Type has values like QuestionYesNo, QuestionYesNoCancel, WarningYesNo, or WarningYesNoCancel. To signal the user choice, the class reuses the enum type StandardButton from QDialogButtonBox, with values like QDialogButtonBox::Yes or QDialogButtonBox::No.

Searching the current sources of all KDE software using the QWidgets technology for the UI, one can see all that API is heavily used. While many places have seen follow-up work to use custom, action-oriented texts for the buttons, as recommended by the KDE HIG, the code itself has to keep using the Yes and No semantics, those being part of the API.

The spirit of this API can be again found in the message API available to the KIO workers to request interaction with the user by the front-end:

int KIO::WorkerBase::messageBox(
    const QString &text,
    MessageBoxType type,
    const QString &title = QString(),
    const QString &buttonYes = QString(),
    const QString &buttonNo = QString(),
    const QString &dontAskAgainName = QString() 
);

The argument of enum type MessageBoxType has values like QuestionYesNo, WarningYesNo, or WarningYesNoCancel. Similar patterns exist in the front-end interface classes.

KDE Frameworks’ QtQuick-based UI library Kirigami with its Kirigami.Dialog and Kirigami.PromptDialog copies the problems of Qt’s QtQuick.Controls.Dialog, having a property standardButtons taking a flag set for predefined buttons by values like QtQuick.Controls.Dialog.Yes or QtQuick.Controls.Dialog.No.

With all this API, not a single comment in its documentation about what the KDE HIG has to say here, and a community explicitly reaching out also to developers without formal training, it is little surprise that even new code written in the KDE community these days uses the discouraged Yes/No dialog pattern.

HIG CPL KF API: TBD ASAP

With the upcoming KDE Frameworks 6 API series around the corner, it would be good to have some substitute API ready before, so the HIG-conflicting API could be deprecated still in KF5. And KF6 would only hold API trapping developers into more HIG-conforming UIs.

Some, sadly non-exciting proposals to address this should appear in the next days, both as further blog posts as well as MRs with actual code.

C++17’s {} impeding SC for new method overloads

Are you the C++ experienced reader to solve the following challenge?

Given a class C (edit: covered by binary compatibility needs) with the overloaded methods foo() and foo(A a):

    class C
    {
    public:
        void foo();
        void foo(A a);
    };

Now you want to add an overload, to serve further use-cases as requested by API consumers, and going for another overload matches your API guidelines:

        void foo(B b);

But there is existing consumer code, making use of C++17, which calls

    C c;
    c.foo({});

So the new overload will not be source-compatible and will break existing code, due to the ambiguity of whether {} should be turned into an instance of A or B.
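A self-contained sketch of the breakage, with A and B reduced to stubs:

    struct A {};
    struct B {};

    class C
    {
    public:
        void foo();
        void foo(A a);
        void foo(B b); // the new overload
    };

    int main()
    {
        C c;
        c.foo({}); // error: call of overloaded 'foo(<brace-enclosed initializer list>)' is ambiguous
    }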

What could be done about this?

The goals here would be:

  1. to enable API consumers to write code which works as if there are the two old and the one new overloads declared
  2. any calls using the {} expression as argument are resolved to a method which emits a compiler warning to avoid that ambiguous argument and which on any next compatibility breaking occasion can be simply removed

While asking around, some approaches have been proposed, but so far none could satisfy the second goal, catching existing ambiguous argument expressions to hint API consumers to adapt them.

Would you have an idea?

Edit: Note that we cannot change A or B (they might be types/classes from elsewhere), and we can only add new API to C, due to binary compatibility needs.

Edit: On second thought, similar problems already exist before C++17 when an argument has a type which can be implicitly converted both to A and B, by implicit constructors or type conversion operators.

KF5’s big ramp of deprecations to KF6

A major part of the ongoing preparations of version 6 of the KDE Frameworks is to see that the API breakage due to happen versus version 5 is mostly an API dropage. And software using KDE Frameworks 5 can already find and use the future-proof API in the version 5 series, next to the legacy one. So the very hurdle to take on porting from KF5 to KF6 is minimal, ideally just changing “5” to “6”, once one has managed to get rid of using the legacy API. This also matches the approach taken by Qt for Qt5 & Qt6, so there is a common API consumer experience. And adding new API already in KF5 allows field-testing its practicability, so KF 6.0 starts with proven API.

Preparing the Ground

To direct developers to the future-proof API there are at least two places:

  • adding notes in the API documentation, to help those writing new code to avoid the outdated API
  • having the compiler emit warnings when building existing code using outdated API

C++ does not come with a native system for integrated tagging and conditional usage of deprecated API and its implementation, or for library- and version-specific control over the emission of warnings. All there is are documentation tools like doxygen, which require a separate tool-specific tag like @deprecated in the documentation comment, and in the C++ syntax since C++14 the [[deprecated(text)]] attribute (before there were compiler-specific attributes), where the API developer has to maintain both separately. Plus a global compiler option like -Wno-deprecated-declarations to not be bothered by any warnings at all.

For KDE Frameworks, therefore, almost 3 years ago (ECM/KF 5.64) the CMake utility ECMGenerateExportHeader was introduced: next to the symbol visibility tagging C++ macros (“export macros”) it also generates version-controlled macros for adding compiler deprecation warnings as well as for wrapping code blocks to be conditionally visible to the compiler (see blog post for more details). It just sadly does not solve the duplication needed in the documentation comment.

A typical deprecation using those macros looks in the declaration like this

#include <foo_export.h>
#if FOO_ENABLE_DEPRECATED_SINCE(5, 0)
/**
 * @deprecated Since 5.0. Use bar().
 */
FOO_EXPORT
FOO_DEPRECATED_VERSION(5, 0, "Use bar()")
void foo();
#endif

and in the implementation like this

#include "foo.h"
#if FOO_BUILD_DEPRECATED_SINCE(5, 0)
void foo()
{
}
#endif

Which by default will then have the compiler notify an API consumer in a build like this (note also the automatic version hint):

/.../fooconsumer.cpp:27:65: warning: ‘void foo()’ is deprecated: Since 5.0. Use bar() [-Wdeprecated-declarations]

See the guidelines how to deprecate API for further details.
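On the consumer side, those generated macros then allow, if memory serves, excluding all API deprecated up to a given version at build time, by setting a (hex-encoded) version macro before the header is included, roughly like this (foo.h hypothetical, macro name following the ECM pattern):

// e.g. set as compile definition: hide all API deprecated in version 5.0 and before
#define FOO_DISABLE_DEPRECATED_BEFORE_AND_AT 0x050000

#include <foo.h> // foo() is now not even declared, so usage fails at compile time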

Piling up

The deprecation macros available by that have since been massively deployed: as of the current development version of KF5 there are more than 1000 hits in the installed headers when grepping for deprecated methods and enumerators. So quite some things learned and changed since KF 5.0 in July 2014 (but also before, see below).

As the internal logic generated with ECMGenerateExportHeader requires listing the specific version at which API was deprecated, this also gives easy access to interesting insight into the activity. Note also that the new macros allowed properly deprecating API that was considered outdated even before KF5, but due to missing systematic support had failed to be removed at the next chance (especially in the KIO core library).

Library                          Versions with new deprecations
attica                           0.2 5.4 5.23
baloo                            5.55 5.69
bluez-qt                         5.57
karchive                         5.0 5.85
kauth (core)                     5.71
kauth (widgets)                  5.92
kbookmarks                       5.0 5.65 5.69
kcalendarcore                    5.64 5.89 5.91 5.95 5.96 5.97
kcmutils                         5.66 5.76 5.82 5.85 5.87 5.88 5.90
kcodecs                          5.5 5.56
kcompletion                      4.0 4.5 5.0 5.46 5.66 5.81 5.83
kconfig (core)                   4.0 5.0 5.24 5.42 5.82 5.89
kconfig (gui)                    5.11 5.39 5.71 5.82
kconfigwidgets                   4.0 5.0 5.23 5.32 5.38 5.39 5.64 5.78 5.80 5.82 5.83 5.84 5.85 5.90 5.93
kcontacts                        5.88 5.89 5.92
kcoreaddons                      4.0 5.0 5.2 5.65 5.67 5.70 5.72 5.75 5.76 5.78 5.79 5.80 5.84 5.86 5.87 5.88 5.89 5.92 5.95 5.97
kdbusaddons                      5.68
kdeclarative (kdeclarative)      5.0 5.45 5.75 5.91 5.95
kdeclarative (quickaddons)       5.88 5.93
kdesu                            5.0
kdnssd                           4.0
kfilemetadata                    5.50 5.60 5.76 5.82 5.89 5.91
kglobalaccel                     4.2 4.4 5.9 5.90
kglobalaccel (runtime)           4.3 5.90
kholidays                        5.95
ki18n                            5.0
kiconthemes                      4.8 5.0 5.63 5.64 5.65 5.66 5.82
kidletime                        5.76
kio (core)                       3.0 3.1 3.4 4.0 4.3 4.5 4.6 5.0 5.2 5.8 5.24 5.45 5.48 5.63 5.61 5.64 5.65 5.66 5.69 5.72 5.78 5.79 5.80 5.81 5.82 5.83 5.84 5.86 5.88 5.90 5.91 5.94 5.96 5.97
kio (filewidgets)                4.3 4.5 5.0 5.33 5.66 5.70 5.76 5.78 5.86 5.97
kio (kntlm)                      5.91
kio (widgets)                    4.0 4.1 4.3 4.4 4.5 4.6 4.7 5.0 5.4 5.6 5.25 5.31 5.32 5.64 5.66 5.71 5.75 5.76 5.79 5.80 5.82 5.83 5.84 5.86 5.87 5.88
kirigami                         5.80 5.86
kitemmodels                      4.8 5.65 5.80
kitemviews                       4.2 4.4 5.0 5.50
kjobwidgets                      5.79
knewstuff (core)                 5.31 5.36 5.53 5.71 5.74 5.77 5.83
knewstuff (qtquick)              5.81
knewstuff (widgets)              5.91
knewstuff                        5.29 5.76 5.77 5.78 5.79 5.80 5.82 5.85 5.91 5.94
knotifications                   5.67 5.75 5.76 5.79
kpackage                         5.21 5.84 5.85 5.86
kparts                           3.0 4.4 5.0 5.72 5.77 5.78 5.80 5.81 5.82 5.83 5.88 5.90
kquickcharts                     5.78
krunner                          5.28 5.71 5.72 5.73 5.76 5.77 5.79 5.81 5.82 5.85 5.86 5.88
kservice                         5.0 5.15 5.61 5.63 5.66 5.67 5.70 5.71 5.79 5.80 5.81 5.82 5.83 5.86 5.87 5.88 5.89 5.90
ktexteditor                      5.80
ktextwidgets                     5.0 5.65 5.70 5.71 5.81 5.83
kunitconversion                  5.91
kwallet                          5.72
kwayland (client)                5.49 5.50 5.52 5.53 5.73 5.82
kwidgetsaddons                   5.0 5.13 5.63 5.65 5.72 5.77 5.78 5.85 5.86 5.97
kwindowsystem                    5.0 5.18 5.38 5.62 5.67 5.69 5.80 5.81 5.82
kxmlgui                          4.0 4.1 5.0 5.75 5.83 5.84
plasma-framework (plasma)        5.6 5.19 5.28 5.30 5.36 5.46 5.67 5.77 5.78 5.81 5.83 5.85 5.86 5.88 5.94
plasma-framework (plasmaquick)   5.12 5.25 5.36
prison                           5.69 5.72
solid                            5.0
sonnet (core)                    5.65
sonnet (ui)                      5.65
syntax-highlighting              5.87
threadweaver                     5.0 5.80

On the QML side deprecations are not that simple (and also not my personal domain), so that is not covered here.

On an expected question: It will be ready, when it is ready. Please join the effort if you can. Find more info e.g. in Volker’s more recent update.

Okteta making a small step to Qt6

Old, but stable, even more so when it comes to the feature set, and still getting its polishing now and then: your simple editor for the raw data of files, named Okteta.

What started in 2003 as a hex editing widget library for KDE3 (and Qt3), of course named KHexEdit (not to be confused with the unrelated hex editor program that was part of KDE at that time), turned into a dedicated application by the title Okteta during the years 2006 to 2008, for KDE4 (and Qt4). From there on a small set of features was added once in a while, most impressively Alexander Richardson’s Structures tool in 2010. Then in 2013 the port to Qt5/KF5 was done (also to a good degree by Alexander). After that things had settled, the program working properly when needed, otherwise just left in the corner of the storage.

Now, nearly 2 decades after the first lines were written, the next port is to be done, to Qt6 and KF6. And this time the actual port is just amazingly boring: changing a few “Qt5” to “Qt6” in the buildsystem (and later some “KF5” to “KF6” once KF6 is ready), adding Qt6::Core5Compat as helper library for 1-2 classes that had not yet been substituted, adding a “const” to the argument of an overridden virtual method, replacing some forward declarations with includes to have all signal and slot argument types fully declared, adapting some “QStringList” forward declarations, a few more explicit constructor calls for type conversions… and done.

It’s even hard to spot a difference (Qt5 above, Qt6 below), just some margins are done differently by the style code right now:

[Screenshot: Okteta 0.26.8 running on Qt5.15 & KF 5.94]
[Screenshot: Okteta (local work branch) running on Qt6.3 and pre-KF6]

Well, the story has a dark spot though: the Structures tool is missing from the port for now. Because it uses an old JavaScript engine which is finally gone in Qt6 (QScript), and so far no-one has completed the port to another JavaScript engine, like the one that is part of QML.

So while the good people working on preparing KF6 are still taking their time as needed, despite this initial happy result there remains a building block to finish for Okteta as well, so it does not suffer from this current port. And instead gets even older and stays stable 🙂

The thing to highlight here is: all the preparation work done by the Qt and KF developers, when followed in due time on the consumer side by taking care of all the things marked deprecated in favour of the substitutes, pays off and has been worth the investment IMHO. No deep and wild waters to cross to reach the new version continent, just a small jump. Hopefully also for your software.

Randa Meetings 2016 Part III: Translation System Troubles

[[Sic, 2016. This post about the results of studying the situation with the translation systems in software by KDE during the Randa event in 2016 had been a draft all the time, due to being complicated matter and partly only lamenting about broken things instead of reporting world improvements. Still the lamenting might provide information unknown to some, so time to simply dump the still valid bits (quite some bits happily have been fixed since and are omitted, same for those notes I could no longer make sense of), even without a proper story thread. So if you ever wondered about the magic happening behind the scenes to get translations to appear in software based on Qt & KF, sit down and read on.]]

IMHO new technology should adapt by default to our cultures. Not our cultures to technology, especially if the technology is enforced on us by law or other pressures. Surely technology should instead allow enhancing cultures, extending options at best. But never limiting or disabling.

One motivation to create technology like Free/Libre Software is the idea that every human should be able to take part in the (world) society. A challenge here is to enrich life for people, not to just make it more complex. Nor should it force them into giving up parts of their culture for access to new technology. Code is becoming the new law maker: if your existing cultural artifacts are not mapped in the ontology of the computer systems, they do not exist by their definition. One has to be happy already if there is at least some “Other” option to file them away with.

Sure, e.g. for producers targeting more than their local home culture it would be so nice-because-simple if everyone would use the same language and other cultural artifacts (think measurement units). But who is to decide what should be the norm and how should they know what is best?

When it comes to software, the so-called “Internationalization” technologies are here to help us in being humane, adding variability to the user interface to allow adapting to the respective user’s culture (so-called “Localization”).
Just, for some little irony, there is also more than one “culture” among Internationalization technologies. And getting those into synchronized cooperation is another challenge, sadly one which is currently not completely mastered when it comes to the software by the KDE community.

Multiple translation systems in one application

Gettext

On Linux (and surely other *nixoid systems) traditionally gettext is used. Whose translation lookup code is either part of glibc or in a separate LibIntl. For a running executable the localization to choose is controlled by the environment variables “LANGUAGE” (GNU gettext specific), “LC_*” and “LANG” (see GNU gettext utilities documentation). Strings to be translated are grouped in so-called domains. There is one file per domain and per language, a so called “catalog”. A catalog appears in two variants, in the “Portable Object” (PO) format intended for direct editing by humans and in the “Machine Object” (MO) format intended for processing by software. For each domain optionally a separate directory can be set below which all the language catalogs belonging to that domain can be found.
On the call to the gettext API like dgettext("domain", msgid) when “LANG” is set to the id “locale” (and the other vars are not set), the translation will be taken from the file in the sub-path locale/LC_MESSAGES/domain.mo (or some less specific variant of the id “locale” until there is such a file) in the given directory for the domain.
So a library using gettext for translations has to install the catalog files in a certain directory at deploy time and, unless using the default, at execution start has to register that directory for its domain (using bindtextdomain(...)). For locating and using the catalogs at run-time, an executable linking to such a library has nothing else to do to assist the library with that. The same holds for the setup of translations in the code of the program itself with gettext.
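A minimal sketch of that flow from a consumer process’ point of view (domain name and path hypothetical):

#include <libintl.h>
#include <clocale>
#include <cstdio>

int main()
{
    std::setlocale(LC_ALL, ""); // pick up LANGUAGE/LC_*/LANG from the environment

    // done once by the library (or program) for its domain:
    bindtextdomain("mylib", "/usr/share/locale");

    // with LANG=de this looks up /usr/share/locale/de/LC_MESSAGES/mylib.mo
    std::printf("%s\n", dgettext("mylib", "Hello"));
    return 0;
}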

Qt’s QTranslator

Qt uses another approach: one registers a set of handlers of type QTranslator, which are queried one after the other whether they can resolve the string to be translated. This is done by the central equivalent of the gettext API, QCoreApplication::translate(const char *context, const char *sourceText, const char *disambiguation, int n), which, if no handler could resolve the string, simply returns the string passed in. That method is invoked indirectly from the tr(...) calls, which are class methods added to QObject subclasses via the Q_OBJECT macro, using the class name as the context.
With the Qt way, usually the caller of the translation invocation has to know which locale should be used and make sure the proper catalog handlers are registered before doing that invocation. The Qt libraries themselves do not do that registration; it is the duty of the application linking to the Qt libraries to do it.
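A minimal sketch of what a Qt(5)-only application has to do itself (catalog names and paths hypothetical):

#include <QCoreApplication>
#include <QLibraryInfo>
#include <QLocale>
#include <QTranslator>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // handler for the catalog(s) of the Qt libraries themselves
    QTranslator qtTranslator;
    if (qtTranslator.load(QLocale::system(), QStringLiteral("qt"), QStringLiteral("_"),
                          QLibraryInfo::location(QLibraryInfo::TranslationsPath)))
        QCoreApplication::installTranslator(&qtTranslator);

    // handler for the application's own catalog
    QTranslator appTranslator;
    if (appTranslator.load(QLocale::system(), QStringLiteral("myapp"), QStringLiteral("_"),
                           QStringLiteral("/usr/share/myapp/translations")))
        QCoreApplication::installTranslator(&appTranslator);

    return app.exec();
}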

The Qt translation approach is not only used by all the Qt modules, but also by many tier 1 modules of the KDE Frameworks. Because the KDE Frameworks module KI18n, which provides a convenience & utility wrapper around gettext, is not available to them, being a tier 1 module itself.

Automagic setup of QTranslator-based translations

The classical application from the KDE spheres is traditionally developed with gettext and KI18n in mind, and thus not used to caring for that registration of Qt translation handlers. To allow them to stay that innocent, all libraries done by KDE using the Qt translation approach trigger the creation and registration of the handler with their catalog themselves during loading of the library, picking a catalog matching the current QLocale::system(). They use the hook Q_COREAPP_STARTUP_FUNCTION, which evaluates to code defining a global static instance of a custom structure. Its constructor, invoked after library load due to being a global static instance, registers the given function as startup function for the QCoreApplication (or subclass) instance or, if such an instance already exists, directly calls the function. To spare the libraries’ authors writing the respective automatic loading code, KDE’s Extra CMake Modules provides the module ECMPoQmTools to have that code generated and added to the library build, by the CMake macro ecm_create_qm_loader(...).
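The generated loader code roughly resembles this sketch (catalog name hypothetical; the actual generated code differs in details):

#include <QCoreApplication>
#include <QLocale>
#include <QTranslator>

static void loadMyLibTranslations()
{
    // parented to the application instance, so cleaned up together with it
    QTranslator *translator = new QTranslator(QCoreApplication::instance());
    if (translator->load(QLocale::system(), QStringLiteral("mylib_qt"), QStringLiteral("_"),
                         QStringLiteral(":/i18n")))
        QCoreApplication::installTranslator(translator);
}
// run on QCoreApplication construction, or directly if an instance already exists
Q_COREAPP_STARTUP_FUNCTION(loadMyLibTranslations)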

One issue: currently the documentation of ECMPoQmTools misses to hint that the creation of the handler is ensured to be done only in the main thread. In case the library is loaded in another thread, the creation code is triggered (and thus delayed) in the main thread via a timer event. This can result in a race condition if other code, run after loading the library in the other thread, already relies on the translation handler being present.

KI18n: doing automagic setup for Qt libraries translations even

The thoughtful reader may now wonder, given that KDE Frameworks modules using the Qt translation system do so by help of automatic loading of catalogs, whether something similar holds for the Qt libraries themselves when it comes to programs from KDE. The answer is: it depends 🙂
If the program links to the KDE Frameworks module KI18n, directly or indirectly, and thus loads it, that library as well has code using Q_COREAPP_STARTUP_FUNCTION, to automatically trigger the creation and deployment of the handler for the translations of the Qt libraries (see src/main.cpp). For which Qt libraries that is, see below. Otherwise, as explained before, the program has to do it explicitly.

So this is why the developers of the typical application done in the KDE community do not have to write explicit code to initiate any loading of translation catalogs.

Does it blend?

Just, the above also means that there still are two separate translation systems with different principles and rules in the same application process (if not more, from 3rd-party libraries, which though usually use gettext). And that brings a set of issues, like potentially resulting in an inconsistently localized UI, due to different libraries having different sets of localizations available, or following different environment variables or internal flags to decide which localization to use (also for things like number formatting). Add to that different teams with different guidelines doing the translations for the different libraries and programs from different organizations.

KI18n: too helpful with the Qt libraries translations sometimes

The automatic creation and deployment of the handler for the translations of the Qt libraries when the KI18n library is linked and thus loaded (as described above) is not really expected in Qt-only, not-KF-using programs. Yet, when such programs load plugins linking directly or indirectly to KI18n, these programs will be confronted with getting the KI18n-generated handler deployed on top (thus overriding any handler previously installed by the program itself). At best this means only duplicated handlers, but it can also mean changing the locale, as the KI18n code picks the catalog locale to use from what is QLocale::system() at the time of being run.

And such a plugin can simply be the Qt platform integration plugin, which in the case of the Plasma platform integration has KI18n in its set of linked and thus loaded libraries. This issue is currently reported indirectly via Bug 215837 – “Nativ KDE QFileDialog changes translation”.

When it comes to the Qt platform integration plugin, the use of Q_COREAPP_STARTUP_FUNCTION when invoked via such a plugin also shows some issue in the design of the Qt startup phase, resulting in the registered function being called twice, in this case resulting in duplicated creation of translation handlers (reported as QTBUG-54479).

Qt5: no longer one single catalog for all Qt modules

Seems in Qt4 times there was one single catalog per language for all that made up the Qt framework. In Qt5 this is no longer true: for each language a separate catalog file is now used per Qt module. There is some backward compatibility though which has hidden this from most eyes so far, so-called meta catalogs (see the Linguist docs). The meta catalog qt_ does not contain translations itself, but links to the catalogs qtbase_, qtscript_, qtquick1_, qtmultimedia_ and qtxmlpatterns_ (see for yourself: open /usr/share/qt5/translations/qt_ll.qm, with ll your language code, e.g. de, in your favorite hex editor).

So applications which use further Qt modules, directly or indirectly, need to make sure themselves to get the respective catalogs loaded and used. Which gets complicated for those used indirectly (via plugins, or indirectly linked as implementation detail of another non-Qt lib). There seems to be no way to know which catalogs are loaded already.

This might be important for programs from KDE using QtQuick Controls, given there exist qtquickcontrols2_*.qm files with some strings. It is yet to be investigated whether those catalogs are perhaps loaded via some QML mechanism, or whether some handling is needed.

Juggling with catalogs on release time

The catalogs with the string translations for KDE software are maintained and developed by the translators in a database separate from the actual sources, a subversion system, partially for historic reasons.
When doing a release of KDE software, the scripts used to generate the source tarballs do both a checkout of the sources to package as well as iterate over the translation database to download and add the matching catalogs.
KDE Frameworks extends this scheme by adding a snapshot of the translations in a commit to the source repository, using a tagged git commit off the main branch.

Issues seen:

  • which catalogs exactly to fetch is based on a fragile system, not exactly defined
  • which version of the database the fetched catalogs are from is not noted, tarball reproducibility from VCS not easily possible (solved for KF)
  • script accidentally added catalog files used internally by KDE translation system for in-source translations (fixed meanwhile)
  • script accidentally added by-product of translation statistic (fixed meanwhile)

Lifting some of the mystery around QT_MOC_COMPAT

((Dumping here the info collected as reminder to self, but also everyone who might wonder and search the internet. If you know a proper place to put it, please copy it there.))

When working on adding macros to control warnings by & visibility to the compiler for deprecated API in the KDE Frameworks modules, a certain C++ preprocessor macro was found in some places in the code: QT_MOC_COMPAT. This macro is found as annotation on signals or slots which are otherwise tagged as deprecated.
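Such an annotated declaration looks roughly like this (class and other macro names hypothetical, following the KDE Frameworks deprecation pattern):

class MyClass : public QObject
{
    Q_OBJECT
Q_SIGNALS:
#if MYLIB_ENABLE_DEPRECATED_SINCE(5, 0)
    /// @deprecated Since 5.0, use somethingChanged(int) instead
    MYLIB_DEPRECATED_VERSION(5, 0, "Use somethingChanged(int)")
    QT_MOC_COMPAT void somethingChanged();
#endif
    void somethingChanged(int value);
};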

Yet, searching in the Qt documentation (both website & local docs) has not yielded any hits. More, grepping the headers of the Qt libraries themselves does not yield any hits either, besides a blank definition of the macro in qglobal.h:

/* moc compats (signals/slots) */
#ifndef QT_MOC_COMPAT
#  define QT_MOC_COMPAT
#else
#  undef QT_MOC_COMPAT
#  define QT_MOC_COMPAT
#endif

So, what has it been there for?

Looking at the code generated by moc, one can see that QT_MOC_COMPAT gets reflected in some flags being set in the metadata about the signal and slot methods. See the example of KActionCollection (as found in the build directory under src/KF5XmlGui_autogen/include/moc_kactioncollection.cpp; note the MethodCompatibility comments):

// ...
static const uint qt_meta_data_KActionCollection[] = {

 // content:
       8,       // revision
       0,       // classname
       0,    0, // classinfo
      12,   14, // methods
       2,  108, // properties
       0,    0, // enums/sets
       0,    0, // constructors
       0,       // flags
       5,       // signalCount

 // signals: name, argc, parameters, tag, flags
       1,    1,   74,    2, 0x06 /* Public */,
       5,    1,   77,    2, 0x16 /* Public | MethodCompatibility */,
       6,    1,   80,    2, 0x16 /* Public | MethodCompatibility */,
       7,    1,   83,    2, 0x06 /* Public */,
       8,    1,   86,    2, 0x06 /* Public */,

 // slots: name, argc, parameters, tag, flags
       9,    0,   89,    2, 0x09 /* Protected */,
      10,    0,   90,    2, 0x19 /* Protected | MethodCompatibility */,
// ...

Those flags reflect enums defined in qmetaobject_p.h:

enum MethodFlags  {
    // ...
    MethodCompatibility = 0x10,
    MethodCloned = 0x20,
    MethodScriptable = 0x40,
    MethodRevisioned = 0x80
};

So, what makes use of this flag?

At first it seems nothing does. Looking some more, one can though discover that the flag is brought back into the game via some bit-shifting, being mapped onto another set of flags (qmetaobject.h & qmetaobject.cpp):

class Q_CORE_EXPORT QMetaMethod
{
public:
    // ...
    enum Attributes { Compatibility = 0x1, Cloned = 0x2, Scriptable = 0x4 };
    int attributes() const;
    // ...
};

int QMetaMethod::attributes() const
{
    if (!mobj)
        return false;
    return ((mobj->d.data[handle + 4])>>4);
}
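So, for the compat methods in the KActionCollection metadata above, the stored flags value 0x16 shifted right by 4 yields 0x1, which is exactly QMetaMethod::Compatibility.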

The very flag QMetaMethod::Compatibility is then checked for in debug builds of Qt during QObject::connect(...) calls, which in such builds invoke the following code to generate runtime warnings in the log:

#ifndef QT_NO_DEBUG
static inline void check_and_warn_compat(const QMetaObject *sender, const QMetaMethod &signal,
                                         const QMetaObject *receiver, const QMetaMethod &method)
{
    if (signal.attributes() & QMetaMethod::Compatibility) {
        if (!(method.attributes() & QMetaMethod::Compatibility))
            qWarning("QObject::connect: Connecting from COMPAT signal (%s::%s)",
                     sender->className(), signal.methodSignature().constData());
    } else if ((method.attributes() & QMetaMethod::Compatibility) &&
               method.methodType() == QMetaMethod::Signal) {
        qWarning("QObject::connect: Connecting from %s::%s to COMPAT slot (%s::%s)",
                 sender->className(), signal.methodSignature().constData(),
                 receiver->className(), method.methodSignature().constData());
    }
}
#endif

Chances are that QT_MOC_COMPAT is a left-over from the times of string-based signal/slot connections. Nowadays the function-pointer-based signal/slot connects catch connections with deprecated signals or slots already at build time, as opposed to the runtime-only approach of the former, which also comes at a runtime cost and thus is only available in debug builds of Qt.
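To illustrate the difference, a sketch with made-up types Foo/Bar, where the signal Foo::oldSignal is assumed to carry a compiler deprecation attribute:

// string-based connect: compiles silently; the COMPAT warning shows
// only at runtime, and only with a debug build of Qt
QObject::connect(foo, SIGNAL(oldSignal(int)), bar, SLOT(handle(int)));

// function-pointer-based connect: the compiler itself warns at build
// time about referencing the deprecated Foo::oldSignal
QObject::connect(foo, &Foo::oldSignal, bar, &Bar::handle);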

Perhaps some Qt contributor reading this can shed more light on this and on its future, present & past 🙂

More control over warnings for and visibility of deprecated library API via generated export macro header

Or: More preparation for the autumn of version 5 of KDE Frameworks

Consumer and producer interest in the legacy parts of a library API

During the development life-time of a library within an API-compatible major version, quite some parts of its API can turn out to be insufficient and thus get deprecated in favour of API additions that serve the purpose better. The producers of the library want to make the consumers of the library aware of those changes, both to make the investment into the improved API pay off for the consumers and to prepare them for the future removal of the old API. As a deprecation note in the API documentation or release notes might be missed for existing consumer software, the compiler is pulled into the game using deprecation attributes on the API symbols, so developers looking at the build log have a chance to be automatically pointed to uses of deprecated API in their software.
Next, once the chance for dropping the legacy parts of an API arrives with the change to a new major version, having the compiler point out those parts is again easier than grepping the codebase for potentially pattern-less informal notes in code comments or the documentation.
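For a single free function this compiler support can e.g. look like this; a minimal sketch using the standard C++14 attribute, with made-up function names:

// a minimal sketch: making the compiler aware of a deprecation
// via the standard C++14 attribute; doFoo/doBetterFoo are made up
[[deprecated("Use doBetterFoo() instead")]]
void doFoo();

void doBetterFoo(); // the replacement API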

At the same time, consumers of a library might be well aware of the API deprecations. But they might want to support a range of released versions of the library from a single code base, without variants for the different library versions. They might have more important issues to solve than deprecated API and want to get rid of all deprecation warnings. Or they might want to ensure that no new uses of deprecated API are added.
Other consumers again might want to use custom builds of the library with all the implementation of deprecated API dropped, at least up to a certain version, to save size when bundling the library with their product.

KDE Frameworks: … TODO

KDE Frameworks, the continuation of the “kdelibs” bundle of libraries, but with emphasis on modularization, is now at API-compatible major version 5. Yet one can find legacy API that was already deprecated in version 3 times, but only as a comment in the API dox, without support by the compiler. And while lots of API is properly marked as deprecated to the compiler, the consumer has no KDE-Frameworks-specific option to control the warnings and visibility. While some “*_NO_DEPRECATED” macros are used, they are not used consistently and usually only for deprecations done at version 5.0.

As you surely are aware, the foundations of the next generation of Qt, version 6, are currently being sketched, and with the end of 2020 there even exists a rough date planned for its initial release. Given the API breakage happening then, the same can also be expected for the libraries that are part of KDE Frameworks. Which would be a good time to also get rid of any legacy cruft.

New: ECMGenerateExportHeader, enabling more control over deprecated API

A proposed new addition to KDE’s Extra CMake Modules (ECM) should allow improving the situation for KDE Frameworks, but also for other libraries: ECMGenerateExportHeader (review request).

It would generate an extended export macro definition header which also includes macros to control which parts of the deprecated API are warned about, which are visible to the compiler for library consumers, and which are included in the build of the library at all. The macros are inspired by similar macros introduced with Qt 5.13, so the mental model can be transferred.
(Inspired, but not a plain copy, as e.g. the name “QT_DISABLE_DEPRECATED_BEFORE” is a bit misleading: that macro is specified to work as “before and including”, so the proposed inspired name uses “*_DISABLE_DEPRECATED_BEFORE_AND_AT”.)

A more elaborate example usage would look like this (note the difference between FOO_BUILD_DEPRECATED_SINCE and FOO_ENABLE_DEPRECATED_SINCE, see the documentation for an explanation):

CMakeLists.txt:

include(ECMGenerateExportHeader)

set(EXCLUDE_DEPRECATED_BEFORE_AND_AT 0 CACHE STRING "Control what part of deprecated API is excluded from build [default=0].")

ecm_generate_export_header(foo
    VERSION ${FOO_VERSION}
    EXCLUDE_DEPRECATED_BEFORE_AND_AT ${EXCLUDE_DEPRECATED_BEFORE_AND_AT}
    DEPRECATION_VERSIONS 5.0 5.12
)

Installed header foo.hpp:

#include <foo_export.h>

enum Bars {
    One,
#if FOO_BUILD_DEPRECATED_SINCE(5, 0)
    Two,
#endif
    Three,
};

#if FOO_ENABLE_DEPRECATED_SINCE(5, 0)
/**
  * @deprecated Since 5.0
  */
FOO_DEPRECATED_VERSION(5, 0)
FOO_EXPORT void doFoo();
#endif

#if FOO_ENABLE_DEPRECATED_SINCE(5, 12)
/**
  * @deprecated Since 5.12
  */
FOO_DEPRECATED_VERSION(5, 12)
FOO_EXPORT void doBar();
#endif

class Foo {
#if FOO_BUILD_DEPRECATED_SINCE(5, 12)
  /**
    * @deprecated Since 5.12
    */
  FOO_DEPRECATED_VERSION(5, 12)
  virtual void doFoo();
#endif
};

Source file foo.cpp:

#include "foo.hpp"

#if FOO_BUILD_DEPRECATED_SINCE(5, 0)
void doFoo()
{
    // [...]
}
#endif

#if FOO_BUILD_DEPRECATED_SINCE(5, 12)
void doBar()
{
    // [...]
}
#endif

#if FOO_BUILD_DEPRECATED_SINCE(5, 12)
void Foo::doFoo()
{
    // [...]
}
#endif
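To round off the picture from the consumer side: a sketch, assuming the semantics proposed in the review request, where FOO_BUILD_DEPRECATED_SINCE controls what is compiled into the library itself (steered by EXCLUDE_DEPRECATED_BEFORE_AND_AT when building the library), while FOO_ENABLE_DEPRECATED_SINCE controls which declarations a consumer gets to see, steered by defining FOO_DISABLE_DEPRECATED_BEFORE_AND_AT:

Consumer source file consumer.cpp:

// a sketch assuming the proposed macro semantics:
// hide all API deprecated up to and including version 5.0
#define FOO_DISABLE_DEPRECATED_BEFORE_AND_AT 0x050000 // hex-encoded 5.0.0

#include "foo.hpp"

int main()
{
    // doFoo(); // would no longer compile: its declaration is hidden,
                // as it was deprecated already at 5.0
    doBar();    // still declared, but the compiler warns about using
                // API deprecated since 5.12
    return 0;
}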

Other, better approaches?

The author is currently not aware of other approaches, but would be happy to learn about any, to compare with, to improve, or even to discard the proposed approach in favour of another.

Please also have a look at the documentation for the proposed CMake macro ECMGenerateExportHeader (review request) and tell your thoughts.

To see this macro applied, have a look at a patch for KCoreAddons and a patch for KService.

Please test improved Plasma Theme switching for Plasma 5.16

Help testing the Plasma Theme switching now

You like Plasma Themes? You even design Plasma Themes yourself? You want to see switching Plasma Themes working correctly, especially for Plasma panels?

Please get one of the Live images with the latest code from the Plasma developers' hands (or, if you build manually yourself from the master branches, last night's code should be fine) and give the switching of Plasma Themes a good test, so we can be sure things will work as expected on the arrival of Plasma 5.16.

If you find glitches, please report them here in the comments, or better on the #plasma IRC channel.

Plasma Themes, to make it more your Plasma

One of the things which makes Plasma so attractive is the officially supported option to also customize the style, beyond colors and wallpaper, allowing users to personalize the look to their liking. Designers have picked up on that and created a good set of custom designs (store.kde.org lists 470 themes at the time of writing).

And while some regressions in the theme support had sneaked into the last Plasma versions, unnoticed because most people & developers happily use the default Breeze theme, with Plasma 5.16 some theming fixes are to arrive.

Plasma Theme looking good on first contact with Plasma 5.16

The most annoying pain point has been that on selecting and applying a new Plasma Theme, the theme data was not correctly picked up, especially by Plasma panels. Only after a restart of Plasma could the theme be fully experienced. This made quick testing of themes, e.g. from store.kde.org, a sad story, given most themes looked a bit broken. And without knowing more, one would think it is the theme's fault.

But some evenings & nights have been spent hunting down the reasons, and it seems they have all been found and corrected. So when one clicks the “Apply” button, the Plasma Theme now instantly styles your Plasma desktop as it should, especially the panels.

And dressing up your desktop to match your day or week or mood or your shirt with one of those themes, some of which are excellent, is only a matter of a few mouse clicks. No more restart needed 🙂

Coming to your system with Plasma 5.16 and KDE Frameworks 5.59 (both to be released in June).

Make your Plasma Theme take advantage of Plasma 5.16

Theme designers, please study the recently added page on techbase.kde.org about Porting Themes to latest Plasma 5. It lists the changes known so far and what to do about them. Please extend the list if something is still missing.
And tell your theme designer friends about this page, so they can improve their themes as well.