Archive for the 'KDE' Category

Akademy-triggered random thoughts about the future

I’m writing this text during a flight from Helsinki to Brussels on my way home from Akademy. I had a good and quite interesting time there, and I can say that I’m proud to be a member of the vibrant community called KDE.

When I was a child I wanted to be an inventor, and I was always thinking about new inventions, doing experiments and discovering things on my own. Then I discovered that with a single computer you can take any idea in your mind and turn it into reality, and that’s how I started to develop software at an early age.

KDE is all about ideas, about innovation, so I feel at home here because people are open minded and like to think about the future. I had lots of discussions about these topics: what the future of KDE could or should be, where we are heading, what the current situation is, and also brainstorming sessions about how far technology can go and how free software can anticipate it.

Currently, and in the short to mid term, the biggest struggle for the KDE community and free software in general is proprietary web services. Richard Stallman has been warning us about this problem for some time now, and I think he is definitely right. This Akademy I have seen lots of people using web services like Google Docs, Gmail for email and chatting, Facebook, LinkedIn, Spotify, Grooveshark, etc. I’m not pointing fingers here: this is just an increasing reality, and I’m part of it too.

We need to accept that web services are here to stay. What we should do is, first, integrate them as much as possible into our ecosystem, and second, promote free software web services like ownCloud, because that’s our only chance to remain relevant and not lose too much ground. We at Wadobo are ready to install it on our own server, for example. The Free Software movement is already part of the history of the Internet, and if we want to remain part of it we need to adapt to the changes.

In a not so distant future, about three to five years from now, things will probably start to change, and I argue that we are going to see an invasion of the Internet into reality. Yes, I’m thinking about immersive reality and gadgets like Google’s glasses. My friend and Wadobo fellow Dani and I had a crazy brainstorming session about this topic during Akademy, and I think we both ended up convinced of the very broad possibilities that this new Internet brings us.

It’s a really interesting exercise to think about the possibilities that devices like Google’s glasses bring, how they can enter our lives, and the implications and repercussions of all that. Basically, the idea is that we will have a pair of glasses that also acts as a computer display, showing information over reality at all times. As simple and powerful as that.

We have seen that kind of stuff in science fiction: Dragon Ball Z, Terminator, and many more. The key thing is to make the glasses correctly show contextual information on the go, transparently and pervasively, when you actually need it and without having to do anything other than pointing your eyes/head, which is something you already do all the time anyway.

The main focus of this technology is to be a passive contextual interface. This is the business model of Google ads: they appear here and there in web pages, showing information related to you. To achieve that, the glasses will use the information sources they have (geolocation via GPS/wireless signals, the camera pointing at what you see, other nearby devices, etc.) to assess the current situation rapidly. Google is an information company, and if they succeed they will suddenly have so much more information about us that it will be quite scary:

    • They are already taking some steps in this direction. For example, the automatic subtitles service on YouTube could probably be applied in the future to real-time translated subtitles, so that people could understand each other no matter what language they speak.
    • The advertising business model will also jump into reality: as an advertiser you could, for example, superimpose an ad for your shiny new car over old cars, or over blank empty spaces. Talk about contextual and immersive advertising.
    • The name badges we wear at events like Akademy won’t be needed any more: that information will be overlaid on your t-shirt, or above your head. These kinds of applications will probably appear in two forms: for use in restricted contexts, and as general purpose social networks with support for groups, etc. The next new Facebook will definitely be born here. This will take social networks to the next level. Parents will love being able to know at a glance where their children are, because their position will appear superimposed in the glasses as an indicator. The same will probably happen when you speak with someone, along with information about the call (time, cost, etc.).
    • These glasses will also be regarded by ecologists as a very good invention, because you won’t need to buy any screen at all any more. No need for a screen at your workstation at work, another for your laptop, another for TV, and no need for any kind of projector for presentations or at the cinema. It will be much more cost effective too. Big screen makers should be worried: their business is going to shift dramatically to small displays, and many companies will suffer in this transition. Wearable computing will change our whole current computer experience: TVs, mobile phones, tablets, ebooks, workstations, laptops, all will be affected. Maybe people in the future will think of our current era as something clunky and unhealthy, where to use a computer you needed all these big and crazy devices, the same way we now think of early 19th-century cities as unhealthy, dirty, smoke-filled and inefficient.
    • People will have their lives recorded, and this will take webcams to a whole new level. People will share their lives live on the net, and they will have them stored completely. You will be able to attend Akademy and do a guided tour of Tallinn through the glasses of David Faure, for example. You will be able to go through a timeline of your whole life, zoom in, and relive the experiences you had.

This will also affect transparency and behavior in public. Car accidents will usually be recorded from the points of view of different people, and it will be much easier to judge what really happened. There will probably be a site where people tag illegal behavior (robbery, parking on zebra crossings, etc.) and this will of course affect how people behave in public. You will also be able to easily report that a traffic light is not working and provide proof of it.

    • Services like Google Street View will also be revamped: they will be instantly updated with the images from everyone’s glasses. The whole street view will be available everywhere, in 3D, and instantly updated. In the end this will allow you not only to see the world through the eyes of someone else, but to virtually move around any place in the world in real time.


And this is just a small list of what our imagination currently lets us reach. Some of the things written here might never happen, some will happen sooner than others, and many new original ideas did not make the list. Most of these things won’t be possible with the first immersive reality glasses available either. But you can see that what we are talking about here is something new and interesting, which will most probably get its own name. For me, it’s the Realnet, because of how close this network is to reality.

Akademy-es 2012 – I go too!

Just a quick note: I’m about to take the plane from Seville to Zaragoza together with my colleagues at Wadobo to attend Akademy-es 2012.

Yay! See you there folks.

PS: We’ll miss you this time, Antonio Larrosa!

The Server in the middle problem and solution

At the beginning of 2009 I posted a story on this very same blog in which I proposed the creation of a web encryption framework to add support for end-to-end encryption in the web browser.

As it turns out, in 2010 I did my Computer Science final project on this subject, successfully implementing an HTML extension in KHTML that fixes what, during a talk I gave on the subject, I coined the “server in the middle” problem (project report in Spanish, code on GitHub).

The solution I developed is just a proof of concept and not a final proposal. It basically extends the div element and the input type="text" element by adding two attributes, encryption="gpg" and encryption-key="<keyid>", so that a div containing a GPG ASCII-armored encrypted text is automatically decrypted and shown as plaintext, and so that when a form with an encrypted input type="text" element is submitted, the data is automatically encrypted.
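To make the flow concrete, here is a rough Python sketch of what the rendering engine would do with such a div. The markup mirrors the extension described above; the decrypt function is just a base64 stand-in for the real GPG call, and the key id is an invented example:

```python
import base64
import re

# Stand-in "cipher": base64 instead of real GPG, just to show the flow.
def decrypt(ciphertext: str, key_id: str) -> str:
    return base64.b64decode(ciphertext).decode("utf-8")

DIV_RE = re.compile(
    r'<div encryption="gpg" encryption-key="(?P<key>[^"]+)">(?P<body>[^<]*)</div>'
)

def render(html: str) -> str:
    """Replace encrypted div bodies with plaintext, as the engine would do
    before layout; the plaintext never enters the scriptable DOM."""
    return DIV_RE.sub(
        lambda m: "<div>%s</div>" % decrypt(m.group("body"), m.group("key")),
        html,
    )

page = '<div encryption="gpg" encryption-key="ABCD1234">%s</div>' % (
    base64.b64encode(b"hello, private world").decode("ascii")
)
print(render(page))  # <div>hello, private world</div>
```

The important property is in the last step: the substitution happens inside the engine, so scripts only ever see the original ciphertext.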

The key point about this proposition is twofold:

  1. The security it provides is website-independent: you only need to trust the web browser. It’s not possible, using JavaScript, the DOM, CSS or any other tactic, to access the contents of the plaintext.
  2. It’s easy to implement as an HTML extension that could be standardized.

This is what makes the proposal differ from other approaches like the deceased FireGPG Mozilla Firefox extension, the JavaScript API that some web browsers like Mozilla Firefox provide for cryptographic primitives, or JavaScript libraries like SlowAES that currently provide cryptographic support for the websites that use them.

I have spent this weekend updating the code to work with recent KDE versions and creating a ready-to-go openSUSE-based [1] live USB appliance called “Server in the middle” that ships Sweeetter, a microblogging application that allows users to exchange messages using the aforementioned HTML extension. Check it out and test it =)

SSL was the first step in securing the web. End-to-end encryption is the next, and it will allow cloud applications like web chats, web mail, and even office apps in the cloud, with privacy orthogonal to the servers used. When you chat on Gmail with your peers over HTTPS, there’s someone else listening: Google. It’s the server that is in the middle by design. When a Los Angeles employee sends an email to a peer using Gmail, it ain’t Google’s business. Perhaps if an end-to-end encryption scheme like the one proposed were available as a standard, Google would offer it in Gmail for businesses and for geeks like us ;-), and certainly other service providers would.

I will say it again: what I proposed is just a proof of concept. We could perhaps encrypt whole forms in a similar manner, or use a secure sandbox inside which plaintext data can be freely manipulated, but whose details are well known to the browser, recorded, and visible to the user in a details dialog, similar to the details of HTTPS connections in current web browsers, with the data going out of the sandbox encrypted in a controlled and secure way.

The issue of privacy on the web will arise sooner or later. All applications are jumping on the web bandwagon, and some applications just need to be secure. It’s not only cryptonerds that need to take this seriously. Big companies that need their data to be secure will either continue using old-fashioned software, or request something better than what we have now. This has already happened sooner than you might think (1999). Thus, we need to take this seriously, and begin standardizing and developing solutions similar to the one proposed, for the shiny future that is about to come.

[1] BTW, SUSE Studio rocks!

QMLPart: enabling a declarative web

As some of you already know, last year I started Wadobo, a Spain-based free software startup, with some university friends. Lately we’ve been working with QML, and now that I know it I can say it’s really nice. Some weeks ago I was with danigm (also from Wadobo) and we were talking about how nice it is to write mobile applications in QML, and how web applications are taking over desktop applications more and more. Did you know GTK already has an HTML/JS frontend? Demo here. BTW, the GNOME guys have been working on something similar to QML too: ClutterScript.

And then the idea came out of the blue. QML could be (in my dreams 😉) the successor of HTML. I know, sounds crazy, doesn’t it? That’s because it is crazy. So anyway, I wanted to test the concept, and today I coded this little KPart called QmlPart that simply loads a .qml file into the KPart. The code is really minimal and it works beautifully, both in Konqueror and in rekonq, and it loads both local and remote QML files fine, even if the QML application is divided into multiple files.

I am really surprised by how easy it was to do this KPart. I uploaded the Qt Declarative examples to Wadobo’s web server and most of them work out of the box with QmlPart. Here is a screencast:

Why would we want to make the web declarative? Because QML is much more powerful than HTML/CSS. Those of you who know QML will probably agree. HTML doesn’t even have a proper way to create column or row layouts. All kinds of animations, transitions and states are easy-peasy in QML. And its object orientation is quite powerful, with properties, inheritance, etc. The web of 2011 can be done much more easily in QML than in current web technologies.

What would be the plan of action? This is just a proof of concept, and it might end here. Or it might not. If we got serious, we would first try to agree on a standard for a declarative web (talk to the GNOME/ClutterScript guys), work on including this standard in web browsers, and create a JavaScript library that loads declarative files as HTML/JS as a fallback for browsers without native declarative web support. That way we could attract people to the platform, and then we would have something.

Either way, I had my fun today with this little KPart and all those crazy ideas around it. Mission accomplished.

Cloud development

As always, it’s been a while since I wrote something here. Anyway, I’m going to present a new concept: cloud development. First of all I must say that the people of Cloud9 IDE (Mozilla Bespin/Skywriter has been merged into it) are already working on something similar to my idea, but not quite the same.

The idea behind Cloud9 IDE is being able to develop JavaScript collaboratively using only the web browser. It even supports debugging, but that’s because the browser can actually run JavaScript code itself. They plan to support syntax highlighting for more languages. They of course have a chat for those connected to the IDE, and changes to the code happen live for everyone connected at the same time.

Gobby/Kobby are native (GNOME, KDE) collaborative editors which use the infinote protocol for collaborative editing. You connect to an infinote server, and then you can see the files in that session, edit them, add/remove files, and chat with your colleagues.

None of those solutions provide real general purpose “cloud development” for me. These are the features I’m looking for:

  • The server side is a bunch of machines in a cloud configuration which provide an API.
  • The server side provides compilation for languages like C/C++.
  • The server side provides inexpensive VMs that can be easily cloned/forked.
  • When you create a new project in the cloud, a new VM is given to you, and you can configure it as you like. There you install the deps for your project development, download the source code, etc.
  • The server side has a rich API and knowledge of high level concepts like:
    • VMs that can be forked and cloned to create new live sessions.
    • Compilation, which can be done quickly and in a distributed way in the cloud.
    • Execution, to be able to remotely watch and control the execution and debugging of GUI applications.
  • The client side could be a web app, like Cloud9 IDE, or a native standalone application with “cloud development” support (KDevelop, Eclipse, whatever…).
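To give the API a bit more body, here is a toy in-memory sketch of those high-level concepts (project VMs, and forking them for live sessions). Every class and method name here is made up; it only illustrates the shape of the idea:

```python
import copy
import itertools

# Toy in-memory sketch of the proposed server-side API.
class DevCloud:
    _ids = itertools.count(1)

    def __init__(self):
        self.vms = {}

    def create_vm(self, project, packages=()):
        """A new project gets a fresh VM you can configure as you like."""
        vm_id = next(self._ids)
        self.vms[vm_id] = {"project": project, "packages": list(packages)}
        return vm_id

    def fork_vm(self, vm_id):
        """Cheap clone of an existing VM, e.g. for a feature-branch session."""
        new_id = next(self._ids)
        self.vms[new_id] = copy.deepcopy(self.vms[vm_id])
        return new_id

cloud = DevCloud()
trunk = cloud.create_vm("amarok", packages=["gcc", "qt-devel"])
branch = cloud.fork_vm(trunk)  # hack on a feature without touching trunk
cloud.vms[branch]["packages"].append("libnewfeature")
print(len(cloud.vms[trunk]["packages"]), len(cloud.vms[branch]["packages"]))  # 2 3
```

The point of the fork is exactly the use case below: the trunk VM stays pristine while the feature branch accumulates its own dependencies.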

A simple cloud development use case:

An Amarok developer configures a cloud development VM for trunk. Then he forks it and creates a git branch in which he starts working on a new feature for Amarok. This feature might need new libraries that are difficult to compile. He doesn’t know much about how to use one of these libraries and it’s giving him headaches, so he asks in the library’s IRC channel and shares a clouddevelopment:// link, and the library’s developer sees the problem in the system while the Amarok developer watches live as it gets resolved.

Then he posts on his blog about the new feature, and some other Amarok developers think it’s really cool, connect to the feature branch’s live session and improve the code a bit, effortlessly.
As you can see, this would mean, among other things:

  • No more “person1: it’s broken! person2: it works here”.
  • No more pastebin: just connect to the live session and see (and even fix) the problem yourself.
  • Configure the VM once, and then everyone can use it.
  • You don’t need to install all the deps in your system.
  • If your computer breaks, you don’t lose the data and configuration.
  • You can access different development environments that might require different versions of the same libs from your own computer, with no dependency hell guaranteed, and no extra local disk usage.
  • Compilation can be really fast, and its speed would not depend on how powerful your computer is: the VMs can come preconfigured with an icecream+ccache set of machines.
  • You could develop KDE for Mac OS X from within Linux, or Windows, or whatever you want.

Of course this is just an idea… and I don’t currently have time to make it happen, but… wouldn’t it be nice? I thought I had to share it =)

And you … Do you dare to predict what will happen in 2091?

An interesting article (in Spanish) about how the great minds of 80 years ago thought things would be in 2011 ends with an open question, which is the title of this post: And you… Do you dare to predict what will happen in 2091?

I can’t resist trying, and here is why. The future is a sea of possibilities, and our mind is built precisely to predict it. We are machines that are always classifying, archiving information and making relations between the data. Using this knowledge we can find out things about the world. Children are bad at judging the speed of moving things, and that’s one of the reasons you should be careful letting a child cross the street. The other, of course, is that children don’t understand that they need to focus on the important task of crossing the street, and even if they do, they don’t know what data they should be looking for or how to mine that data. With time and experience we grow, and we can predict these kinds of events much better, and it also stops being something important.

Such a thing happens all the time. We learn something, it becomes trivial, and suddenly it’s boring stuff. But it’s much more difficult to learn anything you haven’t experienced. That’s why learning history is so important. And even if you know history, you’ll probably not internalize it. The errors of the past will be made over and over again by new people. Trial and error is often a very convenient and powerful way of learning with computers, one that everyone uses, but it’s not so convenient in other matters.

Bottom line is, it’s very interesting to see how the predictions of these prominent people of the past failed. Sometimes reality surpassed their predictions, sometimes it fell short. For example, Doctor Mayo predicted correctly that most infectious ailments would be minimized and controlled and that most deaths would be due to heart issues, cancer, etc. He predicted 70 years of life expectancy in the USA, and it’s now 77. However, these great minds of the past also predicted that poverty would have ended and that we would have an equitable wealth distribution, and it’s clear that that’s not the case.

When making a prediction for the distant future, it’s difficult not to fall into wishful thinking. Things change faster than ever, and current technology is indistinguishable from magic. Because of that, and because my field of expertise is computers, that’s the field most of my predictions will be about: I have better insight into it. However, that also makes it more difficult for me, because as you know, 80 years in computing might be comparable to 5 centuries in other fields.

My predictions for 2091:

Privacy will be at the same time a heavily legislated issue, and a futile one. Governments and private companies will have found unimaginable ways of cooperating to invade citizens’ privacy, and still most people will think that it is for their own good and will cooperate with this task every single day, at all hours. There will be some countries, like Iceland, that will have put a limit to this, but people will still be so dependent on technology that it won’t matter that much.

Most people won’t call themselves cyborgs, but they will be: they’ll have some kind of alien technology in their bodies to gather information and communicate, or perhaps they’ll wear fancy stylish hats that read their minds. Silent people walking down the street will be able to talk with each other using this technology.

Computers will have changed a lot, but then again not that much. We will still be using devices very similar to some we use today: screens and keyboards – or at least computer programmers will. Non-trivial software will still be created in a written computer language. Of course, trivial will have a different meaning in the future.

Cars will still be a very important means of transport. But they will be electric cars, in most cases public taxis in big cities, and no person will drive them: they will drive themselves, and thus roads will be much safer than today.

Genetic advances will still be happening, and it will feel like the field has just started to emerge. It will be like computer intelligence is for us: it will feel like it never quite took off, but it did indeed. We, the people born before roughly 2030, will be old and very ugly in the eyes of the new youth generation, who will all be tall, healthy, beautiful and have perfect bodies. Life expectancy will have reached 100 years, so I might still be alive, but I’ll probably have forgotten this post.

Presenting git timetracker

Damn, it’s been a while since my last post, again. Anyway, I’m here to present a new git tool I’ve been working on together with Daniel (danigm): the new git timetrack command, part of the distributed time tracker project. It allows you to track the time spent on the commits made to a git repository.

It’s quite simple to use: you run git timetrack --start and the clock starts counting. Then you go for a coffee (you use git timetrack --stop for that), and when you come back you resume counting the time by executing git timetrack --start again. Then you make a commit, it gets automatically annotated with the time spent, and the clock stops counting. Some random notes:

  • To start using git timetrack, you first need to execute git timetrack --init in your project to add the needed hooks and options to .git/config.
  • The time annotations appear in git log in seconds, and are shown more nicely when using git timetrack --log.
  • If you forgot to start tracking the time for a commit, you can execute git timetrack --set X to set the clock to X minutes, and then continue counting the time with git timetrack --start. And if you messed up the time and want to start from zero, you can do a git timetrack --reset (or git timetrack --set 0).
  • You can add an estimation of the time you spent on the last commit with git timetrack --set X and then “amend” the commit (adding the time-spent) with git timetrack --amend. With that you can also change the time spent on the last commit, and you can change/set the time spent on any commit by giving the commit-ref to the --amend option.
  • To consult how much time you have currently spent on the next commit (i.e. the status of the stopwatch), use git timetrack --current.
  • To consult the time spent on the project, use git timetrack --summary. It internally makes use of git log and allows the same options as git log, so git timetrack --summary -- --since=1week dir/file.c would tell you the time spent on dir/file.c since last week.
  • If you’re going to start hacking and thus do a batch of commits, you might not want to execute git timetrack --start again after each commit. That’s automatically done for you if you use the aptly named option git timetrack --start-hacking!
  • List all the available options with git timetrack -h.
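The stopwatch semantics behind --start/--stop/--set boil down to something like this little model (in minutes for readability here; the real tool stores seconds, and the method and variable names are mine, not the tool’s):

```python
# Toy model of the stopwatch behind git timetrack --start/--stop/--set.
class Stopwatch:
    def __init__(self):
        self.elapsed = 0       # time accumulated so far
        self.started_at = None # None means the clock is not running

    def start(self, now):
        self.started_at = now

    def stop(self, now):
        if self.started_at is not None:
            self.elapsed += now - self.started_at
            self.started_at = None

    def set(self, minutes):
        # git timetrack --set X: overwrite the accumulated time
        self.elapsed = minutes
        self.started_at = None

clock = Stopwatch()
clock.start(now=0)   # git timetrack --start
clock.stop(now=25)   # coffee break: git timetrack --stop
clock.start(now=40)  # back to work
clock.stop(now=60)   # commit: the hook records clock.elapsed
print(clock.elapsed)  # 45
```

Pauses are simply not counted, which is why the coffee break above contributes nothing to the 45 minutes recorded.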

Git timetracker uses git notes to add time-spent annotations to commits. That means those annotations do not modify the commits, leaving them intact, and are stored in a git notes branch instead. Git notes itself is quite recent, and git timetracker requires git-next (git’s unstable development branch) because it makes use of the git notes merge feature to be able to share the timetracker notes between users and merge changes on them nicely.
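As a sketch of what --summary has to do under the hood: collect the time-spent notes of the commits that git log selected, and add them up. I’m assuming here that the note body is just the number of seconds, which matches what the post says shows up in git log (the sample SHAs are invented):

```python
# Sketch of the aggregation behind `git timetrack --summary`.
def summarize(notes):
    """notes: {commit_sha: note_text}; returns the total time as h:mm:ss."""
    total = sum(int(text.strip()) for text in notes.values())
    hours, rest = divmod(total, 3600)
    minutes, seconds = divmod(rest, 60)
    return "%d:%02d:%02d" % (hours, minutes, seconds)

notes = {
    "a1b2c3": "1500",  # 25 minutes on the first commit
    "d4e5f6": "4200",  # 1 hour 10 minutes on the second
}
print(summarize(notes))  # 1:35:00
```

Filtering by path or date is then just a matter of which commits git log hands over, which is why --summary can accept git log’s own options.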

Why did we develop timetracker? We are creating a small software company called Wadobo, and we need to make (rough) estimations of how long development will take in the projects we work on, so we thought the first thing we needed was some real information about the time spent on those projects.

I think this can be really useful for the mentioned use case. The tool is quite new but works-fine-for-me (TM), and we’ve already been using it in a project for nearly two weeks. Hopefully you’ll like this little gem of ours.

PS: I wanted to do a screencast, I really tried, but boy is that a difficult task. ffmpeg is segfaulting (I don’t use/have PulseAudio but ALSA directly). xvidcap is not capturing audio in the Arch Linux binary, and it’s difficult to compile. recordmydesktop gives me audio artifacts. aur/screencast is not compiling… I gave up, I have better things to do. What tools would you recommend?

GSoC wrapup – Konqueror new bookmarks system

So GSoC has ended already, as most of you know. I haven’t been blogging as much as I would like, and I didn’t manage to finish everything I wanted on time, but that’s not a defeat, only a delay – I will continue working on this bookmarks system until it can get merged into trunk. And then I’ll fix the incoming related bug reports =).

So the state of the art is: we’ve got an Akonadi resource for Konqueror bookmarks which stores the bookmarks in Nepomuk. We’ve got a bookmarks organizer, and we’ve got a bookmarks menu integrated in Konqueror. The new location bar is not finished yet, however, but that will be fixed within days.

A lot has changed since the last report. I’ll try to explain here the base structure of the new classes and what I’ve been doing, mainly related to the location bar, and why I made those design decisions.


The current Konqueror location bar uses a KCompletion class for autocompletion, which is mainly handled by KLineEdit. But how does it work? Let’s use a simple example. Imagine you are developing KMail’s new email dialog. For the line in which the user enters the destination email address you could use a KLineEdit, but to ease the user’s task, you could add autocompletion like this:

KLineEdit* destinationLineEdit = new KLineEdit(this);
KCompletion* completion = new KCompletion();
completion->insertItems(recentAddresses); // e.g. a QStringList of known addresses
destinationLineEdit->setCompletionObject(completion);

You could of course get the list of items to add to the completion object from the address book too ;-). If you were instead developing the location bar for Dolphin, you would like to have directories listed while the user is typing. Instead of manually adding and removing items in the completion object each time the user types, you can use a KUrlCompletion object that does that automatically.

But for the Konqueror location bar we have a problem: it needs to be able to use KUrlCompletion for completing directories, but it also needs to complete from bookmarks and history. We need to work with multiple completion objects at once, even though KLineEdit and similar classes can work with only one. We also need to do more complex completion. A normal KCompletion object contains a list of strings and matches what you type against those, but to have an amazing location bar we need more power: if I type “work” and I have a bookmark tagged “work”, I want it to show up in the completion list even if its URL doesn’t contain the word “work” at all. And that’s only the beginning. I want to be able to order the completed items depending on their relevance and type, and more…

Enter Qt Model-views

I have to confess that I love Qt’s model-view classes. QAbstractItemModel, QAbstractItemView, QAbstractProxyModel, QSortFilterProxyModel, QTreeView… They provide a standard, convenient and flexible way to manage and display almost any kind of collection. Even collections of completed items. Actually, the completed items popup is in reality a QListWidget, which displays a model associated with the completion object.

But that is an internal model which KLineEdit doesn’t let me change. So the question that follows is: why not directly use my own custom item models instead of a KCompletion object? And that’s what I did, even if it required a lot of work.

First of all, I tried to outline the master plan on paper: what classes needed to be created for everything to work fine. Then I wrote down, step by step, what needed to be done before what. Then I started following those steps one by one, and it worked!

Places all over the.. place

The plan was that the location bar autocompletes places. A place could be:

  • a history entry from the history entries model
  • a bookmark from the bookmarks model
  • a URL from a KUrlCompletion model

I already had a bookmarks model, and Konqueror already has a history entries model, but I had no KUrlCompletion model. I looked at the code of the KUrlCompletion class and decided that I didn’t want to rewrite it, so I simply created a KCompletionModel which acts as a proxy and converts a KCompletion object into a QAbstractItemModel. So to create a URL completion model, I do:

KUrlCompletion* urlCompletion = new KUrlCompletion();
KCompletionModel* urlCompletionModel = new KCompletionModel(this);
urlCompletionModel->setCompletion(urlCompletion); // hand the completion object to the proxy model

To do a completion, I connect the urlCompletion object to the textChanged(QString) signal from the line edit. The completion object then reflects the changes, which in turn instantly appear in the model. It’s not the best solution, but hey, it works.

Also, another problem was that the completion object of KLineEdit would be replaced by a single model, not three. But now, as you see, I again have multiple completion models, so what was the solution? Creating an aggregated model built out of multiple source models. It works like this:

KAggregatedModel* aggregatedModel = new KAggregatedModel(this);

Now, those who know how an item model works will probably have a lot of questions about that. For example: do all those models need to share the same columns? The answer is no. The aggregated model I created is quite simplistic in the way it works:

  • It assumes the source models have only one level of children.
  • It only shows one column, which is the default display column.
  • It shows the list of items as a list, showing first the items from the first item model, then the ones from the second, etc.
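The row mapping such an aggregated model needs is simple enough to sketch in a few lines of Python: the aggregated list is just the source lists concatenated, so a proxy row maps to (which source, row within it) by walking the cumulative counts. The real KAggregatedModel does this with QModelIndex bookkeeping, of course, and the sample data here is invented:

```python
# Map a row of the aggregated (concatenated) model back to its source model.
def map_to_source(sources, proxy_row):
    for index, items in enumerate(sources):
        if proxy_row < len(items):
            return index, proxy_row  # (source model index, row in that model)
        proxy_row -= len(items)
    raise IndexError("row out of range")

history = ["kde.org", "planet.kde.org"]
bookmarks = ["wadobo.com"]
sources = [history, bookmarks]

assert map_to_source(sources, 1) == (0, 1)  # second history entry
assert map_to_source(sources, 2) == (1, 0)  # first bookmark
```

This is also why the model can stay single-column: it never needs to reconcile the source models’ columns, only their row counts.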

Places Manager

So far we have an aggregated model with completed URLs, unfiltered bookmarks and unfiltered history entries. That’s still far from being the amazing completer model. What we need to add is the amazing filtering and sorting. That’s done by the final completion model, the master of places…

It’s called PlacesProxyModel and inherits QSortFilterProxyModel. It takes a QAbstractItemModel, reads the Konqueror::PlaceUrlRole for each index, gets the URL, and obtains the corresponding Konqueror::Place for it. Then, knowing all the available information for that place, it tries to match it against the query the user entered in the line edit and sets its relevance for sorting purposes. It also filters out duplicate entries. Quite an achievement, but how does all that work again?

First off, all the previous models (the URL completion, bookmarks and history entries models) report their items to a Places Manager, which keeps track of them and contains a Place for each relevant URL. So for example, if the user has a bookmark for a URL that was also visited yesterday, there’s a place in the Places Manager holding the information from both the bookmark and the history entry.
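As a rough sketch of that idea (all names and fields here are invented for illustration, not the real Konqueror classes), a places manager merging bookmark and history information for the same URL could look like this:

```cpp
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch: one Place per URL, merging whatever the
// bookmark and history models have reported for it.
struct HistoryInfo {
    int visitCount;
    long lastVisit;  // e.g. a Unix timestamp
};

struct Place {
    std::string url;
    std::optional<std::string> bookmarkTitle;  // set if bookmarked
    std::optional<HistoryInfo> history;        // set if ever visited
};

class PlacesManager {
public:
    void reportBookmark(const std::string& url, const std::string& title) {
        place(url).bookmarkTitle = title;
    }

    void reportVisit(const std::string& url, int visits, long lastVisit) {
        place(url).history = HistoryInfo{visits, lastVisit};
    }

    const Place* find(const std::string& url) const {
        auto it = places.find(url);
        return it == places.end() ? nullptr : &it->second;
    }

private:
    // Get or create the Place for a URL.
    Place& place(const std::string& url) {
        auto [it, inserted] = places.try_emplace(url);
        if (inserted) it->second.url = url;
        return it->second;
    }

    std::map<std::string, Place> places;
};
```

The point is the merge: both source models feed the same Place, so later consumers see bookmark and history data for a URL in one object.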

All those models also support obtaining the URL related to each item via the Konqueror::PlaceUrlRole. The aggregated model proxies calls to retrieve data for any role, including that one, so in the end the information can be retrieved by the places proxy model. Then an algorithm that takes into account the number of visits, the date of the last visit, whether the user-typed string matched the place’s URL, title or tags, etc., sets the relevance of the items in the proxy model.
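A toy version of such a relevance function might look like this (the weights, names and rules below are made up for illustration; the real algorithm in PlacesProxyModel differs):

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Hypothetical scoring sketch: a place that doesn't match the query at
// all scores 0 (filtered out); matches are boosted by visit frequency
// and by being bookmarked. All weights are invented for this example.
struct ScoredPlace {
    std::string url;
    std::string title;
    int visitCount = 0;
    bool bookmarked = false;
};

inline bool containsNoCase(const std::string& haystack,
                           const std::string& needle) {
    auto it = std::search(
        haystack.begin(), haystack.end(), needle.begin(), needle.end(),
        [](char a, char b) {
            return std::tolower(static_cast<unsigned char>(a)) ==
                   std::tolower(static_cast<unsigned char>(b));
        });
    return it != haystack.end();
}

// Higher score means more relevant; 0 means "filter this place out".
inline int relevance(const ScoredPlace& p, const std::string& query) {
    int score = 0;
    if (containsNoCase(p.url, query))   score += 50;
    if (containsNoCase(p.title, query)) score += 30;
    if (score == 0) return 0;          // no match at all
    score += p.visitCount;             // frequently visited ranks higher
    if (p.bookmarked) score += 20;     // bookmarks get a boost
    return score;
}
```

Sorting the proxy model by this score descending would then put the most plausible places at the top of the completion popup.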

The road to the location bar

The new location bar inherits from my modified KLineEdit, which I have named KLineEditView and which uses a QAbstractItemModel for completion. An amazing location bar needs to be able to show a star that can be clicked to add or remove a bookmark, and the completion needs to show, for each place listed, whether it’s bookmarked, its tags, etc. Contextual information.

State of the art

What has been done? All the things I have mentioned above are working. They can always improve, but the code is already there. The location bar widget is the last thing I started writing, so it’s not finished: it already has plugin support, so that new icons/sub-widgets can be shown inside the location bar. Now I need to write the plugin that lets the user bookmark the current location, and I also need to write the CompletionPlaceDelegate to get a properly eye-candied completion list. I’ll do that shortly.

Future and end

Unfortunately, a lot of things need to be done before this new bookmarks system ends up in Konqueror trunk. Revamping such an integral part of Konqi is not a simple task. We also want to be sure that when the replacement comes into play the user doesn’t suffer it but enjoys it instead, so I need to find and fix all the regressions. I need to write documentation and test cases too. I honestly don’t know when the job will be done, but I know I’ll continue working on it.

I especially want to thank my mentor David Faure for the support, and for giving me a thumbs up even though I didn’t finish everything I wrote in my GSoC proposal on time. You rock, David!

This post has been too long, I admit, so I think I’ll stop here :D. If you read everything: yay, you’ve got too much free time, so go and do something more useful!

GSoC: I Am Alive! (And bookmarks too!)

First I want to apologise for not having blogged about my GSoC for a month. But that doesn’t mean I haven’t been working, quite the contrary. I’ve had limited connectivity, and in the limited time I had Internet access I didn’t feel like blogging. Now I do. Lots of things have happened. I’ve even suffered the H1N1 flu, but don’t worry, it’s not as bad as TV makes it look: for me it was two days with flu, and on the third day I was 100% OK.

The Konqueror bookmarks menu is already rewritten to support the new bookmarks system. I like to read other people’s code because it can inspire me when I want to develop something. I’ve been reading the Arora code, and I liked their idea of using a QAbstractItemModel as the data source for a QMenu, so that’s how I’ve implemented the new bookmarks menu.

I’ve always wanted menus more advanced than just normal menus. When I saw that inside the Mac OS X Help menu there’s a search bar, I knew I wanted something like that in KDE, and I wondered if it was possible. The answer is yes. Qt’s menu system allows inserting custom widgets via the QWidgetAction class, so I’ve added a search bar to the bookmarks menu. When you type in the search bar, the source model of the bookmarks menu changes to a DescendantsProxyModel, which represents all bookmarks and folders as a flat list, and a QSortFilterProxyModel filters the results. It’s not finished yet (there are some issues with the search), but all in all I’m quite satisfied. Next step: awesome bar, and fixing more bugs =).
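The mechanics of that search can be sketched without Qt: flatten the bookmark tree into a list (what the descendants proxy does), then filter the flat list (what the sort/filter proxy does). Everything below is illustrative, not Arora’s or Konqueror’s actual code:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: folders have children, leaf bookmarks don't.
struct BookmarkNode {
    std::string title;
    std::vector<BookmarkNode> children;
};

// Depth-first flatten: every node (folder or bookmark) becomes one
// entry in a flat list, like a descendants proxy model would expose.
inline void flatten(const BookmarkNode& node, std::vector<std::string>& out) {
    out.push_back(node.title);
    for (const auto& child : node.children)
        flatten(child, out);
}

// Flatten, then keep only the titles containing the query, like a
// sort/filter proxy stacked on top of the flattened model.
inline std::vector<std::string> searchBookmarks(const BookmarkNode& root,
                                                const std::string& query) {
    std::vector<std::string> flat;
    for (const auto& child : root.children)  // skip the invisible root
        flatten(child, flat);

    std::vector<std::string> matches;
    for (const auto& title : flat)
        if (title.find(query) != std::string::npos)
            matches.push_back(title);
    return matches;
}
```

The nice property of this proxy-stacking design is that the menu widget never changes: it just swaps which model it is showing, tree or flat-and-filtered.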

I’ve also been working on the bookmarks organizer: fixing bugs, adding support for inline bookmark editing in the bookmarks view via double-click, adding a breadcrumb view similar to Dolphin’s breadcrumb mode but using a QAbstractItemModel as the source for paths, etc. Here’s a small videocast where I show some of the mentioned features in action:

GSoC biweekly report++ – State of the bookmarks editor

Summer is already here, for those who haven’t noticed: we’re having 44ºC here in Sevilla during the day, and yesterday it was 36ºC at 23:30. That’s what Spain can be like in summer, so GCDS attendees, you’ve been warned, and don’t forget your swimwear!

The first week of June I went to the AC/DC concert in Madrid and it was awesome, but I didn’t program much that week. This week I’ve been working on the bookmarks editor, and now you can see it’s getting more featureful. First, however, I’d like to share some comments about other bookmarks editors I’ve found around the net.

Safari 4’s bookmarks editor, with its iTunes-like cover view, is the most impressive:


I haven’t tested it, but it’s quite clear that the cover view eats a lot of space. They removed the area dedicated to showing the details of the current bookmark, and instead (I guess) the bookmarks are editable inline.

Another point worth mentioning is that on the left side they have two sections: Collections and Bookmarks. In the new Konqueror bookmarks editor I’m using Akonadi, and Akonadi already uses the concept of collections internally, so something like that will be easy to implement. Instead of showing on the left a tree with a root element and everything inside it (Bookmarks Toolbar, Bookmarks Menu, Recently Added Bookmarks, Unclassified Bookmarks, etc.), I could use a lateral panel similar to the one in the Open File dialog with those items, and then show a breadcrumb widget above the bookmarks listing widget to let the user know (and manage) the current location:

bookmarks 15 june - open file

Okay, now let’s see what I’ve done so far. You can add and remove bookmarks, and edit them (either using the line edits or inline in the bookmarks view). I’ve also borrowed some code from Dolphin to make the bookmarks view columns resize nicely. And you can show whichever columns you choose; it’s up to you:

bookmarks 14 june

If you look closely, you’ll see that the menus are pretty similar to the current bookmarks editor’s menus. I’ve just replaced the Bookmarks and Folder menus with an Organize menu. Comparing the Firefox bookmarks organizer with the current Konqueror bookmarks editor, I see that Konqueror has more options, but even then they seem quite intuitive. Something to wonder about is why the Firefox bookmarks organizer is missing toolbar buttons for the most common actions: New Bookmark, New Folder, Remove.

bookmarks 14 june - cols

Next week I want to have all the already-listed features of the bookmarks editor working (see, for example, the “Find in bookmarks” box? It doesn’t work yet. Same for the breadcrumb, which is just some text in a label, etc.), and the following week I’ll hopefully have sorted out how to do the virtual folders structure and how to show it to the user in the bookmarks editor.