Archive for the 'KDE' Category

Akademy-triggered random thoughts about the future

I'm writing this text during a flight from Helsinki to Brussels on my way home from Akademy. I had a good and quite interesting time there, and I can say that I'm proud to be a member of the vibrant community called KDE.

When I was a child I wanted to be an inventor, and I was always thinking about new inventions, doing experiments and discovering things on my own. Then I discovered that with a single computer you can take any idea you have in your mind and turn it into reality, and that's how I started developing software at an early age.

KDE is all about ideas, about innovation, so I feel at home here, because people are open minded and like to think about the future. I had lots of discussions about these topics: what the future of KDE could or should be, where we are heading, what the current situation is, and also brainstorming sessions about how far technology can go and how free software can anticipate it.

Currently, and in the short to mid term, the biggest struggle for the KDE community and free software in general is proprietary web services. Richard Stallman has warned us about this problem for some time now, and I think it's definitely true. This Akademy I have seen lots of people using web services like Google Docs, Gmail for email and chatting, Facebook, LinkedIn, Spotify, Grooveshark, etc. I'm not pointing fingers here: this is just an increasing reality, and I am part of it too.

We need to accept that web services are here to stay. What we should do is, first, integrate them as much as possible into our ecosystem, and second, promote free software web services like ownCloud, because that's our only chance to remain relevant and not lose too much ground. We at Wadobo are ready to install it on our own server, for example. The Free Software movement is already part of the history of the Internet, and if we want to remain part of it we need to adapt to the changes.

In a not-so-distant future, about three to five years from now, things will probably start to change, and I argue that we are going to see an invasion of the Internet into reality. Yes, I'm thinking about immersive reality and gadgets like Google glasses. My friend and Wadobo colleague Dani and I had a crazy brainstorming session about this topic during Akademy, and I think we both ended up convinced of the very broad possibilities this new Internet brings us.

It's a really interesting exercise to think about the possibilities that devices like Google glasses bring, how they can enter our lives, and the implications and repercussions of all that. Basically, the idea is that you will have a pair of glasses that also acts as a computer display, showing information over reality at all times. As simple and powerful as that.

We have seen that kind of thing in science fiction: Dragon Ball Z, Terminator, and many more. The key is to make the glasses show the right contextual information on the go, transparently and pervasively, when you actually need it and without you having to do anything other than pointing your eyes/head, which is something you already do all the time anyway.

The main focus of this technology is to be a passive contextual interface. This is the business model of Google ads: they appear here and there on web pages, showing you related information. To achieve that, the glasses will combine the information sources they have (geolocation via GPS/wireless signals, the camera pointing at what you see, other nearby devices, etc.) to assess the current situation quickly. Google is an information company, and if they succeed they will suddenly have so much more information about us that it will be quite scary:

    • They are already taking steps in this direction. For example, the automatic subtitles service in YouTube could probably be applied in the future to real-time translated subtitles, so that everyone could understand each other no matter what language they speak.
    • The advertising business model will also jump into reality: as an advertiser you could, for example, superimpose an ad for your shiny new car over old cars, or over blank empty spaces. Talk about contextual and immersive advertising.
    • The badges with our name and nickname that we use at events like Akademy won't be needed any more: that information will be overlaid on your t-shirt, or above your head. These kinds of applications will probably appear in two forms: for use in restricted contexts, and as general-purpose social networks with support for groups, etc. The next Facebook will definitely be born here. This will take social networks to the next level. Parents will love being able to know at a glance where their children are, because their position will appear superimposed in the glasses as an indicator. This will probably also happen when you speak with someone, along with information about the call (time, cost, etc.).
    • These glasses will also be regarded by ecologists as a very good invention, because you won't need to buy any screens at all anymore. No need for a screen at your workstation, another for your laptop, another for the TV, and no need for any kind of projector for presentations or at the cinema. It will also be much more cost-effective. Big screen makers should be worried: their business is going to shift dramatically towards small displays, and many companies will suffer in this transition. Wearable computing will change our whole current computing experience: TVs, mobile phones, tablets, ebooks, workstations, laptops will all be affected. Maybe people in the future will look at our current era as clunky and unhealthy, where using a computer meant using all these big and crazy devices, the same way we now think of early 19th-century cities as very unhealthy: dirty, smoky and inefficient.
    • People will have their lives recorded, and this will take webcams to a whole new level. People will share their lives live on the net and have them stored completely. You will be able to attend Akademy and take a guided tour of Tallinn through the glasses of David Faure, for example. You will be able to go through a timeline of your whole life, zoom in, and relive the experiences you had.

    • This will also affect transparency and behavior in public. Car accidents will usually be recorded from the points of view of different people, and it will be much easier to judge what really happened. There will probably be a site where people tag illegal behavior (robbery, parking on zebra crossings, etc.), and this of course will affect how people behave in public. You will also be able to easily report broken traffic lights and provide proof of it.

    • Services like Google Street View will also be revamped: they will be instantly updated with the images from everyone's glasses. The whole street view will be available everywhere, in 3D, and instantly updated. In the end, this will allow you not only to see the world through someone else's eyes, but to virtually move around any place in the world in real time.


And this is just a small list of what our imagination currently lets us reach. Some of the things written here might never happen, some will happen sooner than others, and many new original ideas did not make the list. Most of these things won't be possible with the first immersive reality glasses either. But you can see that what we are talking about is something new and interesting, which will most probably get its own name. For me, it's the Realnet, because of how close this network is to reality.


Akademy-es 2012 – I'm going too!

Just a quick note: I'm about to take the plane from Seville to Zaragoza together with my colleagues at Wadobo to attend Akademy-es 2012.

Yay! See you there folks.

PS: We'll miss you this time, Antonio Larrosa!

The Server in the middle problem and solution

At the beginning of 2009 I posted a story on this very same blog in which I proposed creating a web encryption framework to add support for end-to-end encryption in the web browser.

As it turns out, in 2010 I did my Computer Science final project on this subject, successfully implementing an HTML extension in KHTML that fixes what, during a talk I gave on the subject, I coined the Server in the middle problem (project report in Spanish, code on GitHub).

The solution I developed is just a proof of concept and not a final proposal. It basically extends the div element and the input type="text" element by adding two attributes, encryption="gpg" and encryption-key="<keyid>", so that a div containing a GPG ASCII-armored encrypted text is automatically decrypted and the plaintext shown, and when a form with an encrypted input type="text" element is submitted, the data is automatically encrypted.
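To make that more concrete, here is a rough sketch of what a page using the proposed extension could look like. Only the encryption and encryption-key attributes come from the proof of concept; the key ID and the ciphertext block are just placeholders.

    <!-- Hypothetical markup for the proposed extension; the key ID and
         the ciphertext below are placeholders, not real data. -->
    <div encryption="gpg" encryption-key="0xDEADBEEF">
    -----BEGIN PGP MESSAGE-----
    (ASCII-armored ciphertext; the browser decrypts it and renders the plaintext)
    -----END PGP MESSAGE-----
    </div>

    <form action="/post" method="post">
      <!-- Text typed here is encrypted to the given key by the browser
           before the form is submitted, so the server never sees it. -->
      <input type="text" name="message" encryption="gpg" encryption-key="0xDEADBEEF">
      <input type="submit" value="Send">
    </form>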

The key point of this proposal is twofold:

  1. The security it provides is website-independent: you only need to trust the web browser. It's not possible to access the contents of the plaintext using JavaScript, the DOM, CSS or any other tactic.
  2. It's easy to implement as an HTML extension that could be standardized.

This is what makes the proposal different from other approaches like the defunct FireGPG Mozilla Firefox extension, the JavaScript API for cryptographic primitives that some web browsers like Mozilla Firefox provide, or JavaScript libraries like Slow AES that currently provide cryptographic support for the websites that use them.

I have spent this weekend updating the code to work with recent KDE versions and creating a ready-to-go openSUSE-based [1] live USB appliance called "Server in the middle" that ships Sweeetter, a microblogging application that allows users to exchange messages using the aforementioned HTML extension. Check it out! It's there for you to test =)

SSL was the first step in securing the web. End-to-end encryption is the next, and it will allow cloud applications like web chat, web mail, and even office apps in the cloud to have privacy that is orthogonal to the servers used. When you chat on Gmail with your peers over HTTPS, there's someone else listening: Google. It's the server that is in the middle by design. When an employee in Los Angeles sends an email to a colleague using Gmail, it isn't Google's business. Perhaps if an end-to-end encryption scheme like the one proposed were available as a standard, Google would offer it in Gmail for businesses and for geeks like us ;-), and certainly other service providers would.

I will say it again: what I proposed is just a proof of concept. We could perhaps encrypt whole forms in a similar manner, or use a secure sandbox inside which plaintext data can be freely manipulated, but whose details are well known to the browser, recorded, and visible to the user in a details dialog, similar to the details of HTTPS connections in current web browsers, with any data leaving the sandbox being encrypted in a controlled and secure way.

The issue of privacy on the web will arise sooner or later. All applications are jumping on the web wagon, and some applications simply need to be secure. It's not only cryptonerds that need to take this seriously. Big companies that need their data to be secure will either keep using old-fashioned software or demand something better than we have now. This has already happened, sooner than you might think (1999). Thus, we need to take this seriously and begin standardizing and developing solutions similar to the one proposed, for the shiny future that is about to come.

[1] BTW, SUSE Studio rocks!

QMLPart: enabling a declarative web

As some of you already know, last year I started Wadobo, a Spain-based free software startup, with some university friends. Lately we've been working with QML, and now that I know it, I find it really nice. Some weeks ago I was with danigm (also from Wadobo) and we were talking about how nice it is to write mobile applications in QML, and how web applications are taking over desktop applications more and more. Did you know GTK already has an HTML/JS frontend? Demo here. BTW, the GNOME folks have been working on something similar to QML too: ClutterScript.

And then the idea came out of the blue: QML could be (in my dreams 😉) the successor of HTML. I know, sounds crazy, doesn't it? That's because it is crazy. So anyway, I wanted to test the concept, and today I coded this little KPart called QmlPart that simply loads a .qml file into the KPart. The code is really minimal and it works beautifully, both in Konqueror and in rekonq, and it loads both local and remote QML files just fine, even if the QML application is split across multiple files.

I am really surprised at how easy it was to write this KPart. I uploaded the Qt Declarative examples to Wadobo's web server, and most of them work out of the box with QmlPart. Here is a screencast:

Why would we want to make the web declarative? Because QML is much more powerful than HTML/CSS. Those of you who know QML will probably agree. HTML doesn't even have a proper way to create column or row layouts. All kinds of animations, transitions and states are easy-peasy in QML. And its object orientation is quite powerful, with properties, inheritance, etc. The web of 2011 could be built much more easily in QML than with current web technologies.
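To give a flavor of what I mean, here is a small, generic QML 1.x snippet (not taken from QmlPart, just standard QML) showing a column layout, a state and an animated transition, things that take far more effort in HTML/CSS:

    // hello.qml - a minimal QML 1.x example: column layout, a state
    // and an animated transition; nothing here is specific to QmlPart.
    import QtQuick 1.0

    Column {
        spacing: 10

        Rectangle {
            id: box
            width: 100; height: 100
            color: "steelblue"

            states: State {
                name: "highlighted"
                PropertyChanges { target: box; color: "tomato"; width: 200 }
            }
            transitions: Transition {
                ColorAnimation { duration: 300 }
                NumberAnimation { properties: "width"; duration: 300 }
            }

            MouseArea {
                anchors.fill: parent
                onClicked: box.state = (box.state === "" ? "highlighted" : "")
            }
        }

        Text { text: "Click the box to toggle its state" }
    }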

What would be the path of action? This is just a proof of concept, and it might end here. Or it might not. If we get serious, we would first try to agree on a standard for a declarative web (talk to the GNOME/ClutterScript folks), work on including this standard in web browsers, and create a JavaScript library that renders declarative files as HTML/JS as a fallback for browsers without native declarative web support. That way we could attract people to the platform, and then we would have something.

Either way, I had my fun today with this little KPart and all the crazy ideas around it. Mission accomplished.

Cloud development

As always, it's been a while since I wrote something here. Anyway, I'm going to present a new concept: cloud development. First of all I must say that the people behind Cloud9 IDE (into which Mozilla Bespin/Skywriter has been merged) are already working on something similar to my idea, but not quite the same.

The idea behind Cloud9 IDE is to develop JavaScript collaboratively using only the web browser. It even supports debugging, but that's because the browser can actually run JavaScript code itself. They plan to support syntax highlighting for more languages. They of course have a chat for those connected to the IDE, and changes to the code happen live for everyone connected at the same time.

Gobby/Kobby are native (GNOME, KDE) collaborative editors that use the Infinote protocol for collaborative editing. You connect to an Infinote server, and then you can see the files in that session, edit them, add/remove files, and chat with your colleagues.

None of those solutions provides real, general-purpose "cloud development" to me. These are the features I'm looking for (there's a rough sketch of how this could look right after the list):

  • The server side is a bunch of machines in a cloud configuration which provide an API.
  • The server side provides compilation for languages like C/C++.
  • The server side provides inexpensive VMs that can be easily cloned/forked.
  • When you create a new project in the cloud, a new VM is given to you, and you can configure it as you like. There you install the deps for your project development, download the source code, etc.
  • The server side has a rich API and knows about high-level concepts like:
    • VMs, which can be forked and cloned to create new live sessions.
    • Compilation, which can be done quickly and in a distributed way in the cloud.
    • Execution, so you can remotely watch and control the execution and debugging of GUI applications.
  • The client side could just as well be a web app, like Cloud9 IDE, or a native standalone application with "cloud development" support (KDevelop, Eclipse, whatever...).
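To make the idea a bit more concrete, here is a purely hypothetical sketch of how a command-line client for such a service could look. None of these commands, hosts or options exist; they just mirror the features listed above.

    # Hypothetical "clouddev" CLI; everything below is made up for illustration.
    clouddev login kde.org                          # authenticate against the cloud API
    clouddev vm create amarok-trunk                 # get a fresh VM and configure it once
    clouddev vm fork amarok-trunk featurex          # cheap clone for a new live session
    clouddev session share featurex                 # returns a clouddevelopment:// URL
    clouddev build featurex --distributed           # compile in the cloud, not locally
    clouddev run featurex amarok --remote-display   # watch and debug the GUI app remotely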

A simple cloud development use case:

An Amarok developer configures a cloud development VM for trunk. Then he forks it and creates a git branch in which he starts working on a new feature for Amarok. This feature might need a new library that is difficult to compile. He doesn't know much about how to use this library and it's giving him headaches, so he asks in the #lib IRC channel and shares a link to clouddevelopment://kde.org/amarok/livesessions/featurex, and the developer of libx sees the problem in the live system while the Amarok developer watches live how it gets resolved.

Then he posts on his blog about his new feature, and some other Amarok developers think it's really cool, connect to the feature branch's live session, and improve the code a bit, effortlessly.

As you can see, this would mean, among other things:

  • No more “person1: it’s broken! person2: it works here”.
  • No more pastebin: just connect to the live session and see (and even fix) the problem yourself.
  • Configure the VM once, and then everyone can use it.
  • You don’t need to install all the deps in your system.
  • If your computer breaks, you don’t lose the data and configuration.
  • You can access, from your own computer, different development environments that require different versions of the same libraries: no dependency hell, and no extra local disk usage.
  • Compilation can be really fast, and its speed would not depend on how powerful your computer is: the VMs can come preconfigured with an icecream+ccache set of machines.
  • You could develop for KDE on Mac OS X from within Linux, or Windows, or whatever you want.

Of course this is just an idea… and I don't currently have time to make it happen, but… wouldn't it be nice? I thought I had to share it =)

And you … Do you dare to predict what will happen in 2091?

An interesting article (in Spanish) about how great minds 80 years ago thought things would be in 2011 finishes with an open question, which is the title of this post: And you… do you dare to predict what will happen in 2091?

I can't resist trying, and here is why. The future is a sea of possibilities, and our mind evolved precisely to predict it. We are machines that are always classifying, archiving information and making connections between the data. Using this knowledge we can find out things about the world. Children are bad at judging the speed of moving things, and that's one of the reasons you should be careful letting a child cross the street. The other, of course, is that children don't understand that they need to focus on the important task of crossing the street, and even if they do, they don't know what data they should be looking for or how to make sense of it. With time and experience we grow, we get much better at predicting these kinds of events, and it also stops being something important.

This kind of thing happens all the time. We learn something, it becomes trivial, and suddenly it's boring stuff. But it's much more difficult to learn anything you haven't experienced yourself. That's why learning history is so important. And even if you know history, you'll probably not internalize it; the errors of the past will be repeated over and over again by new people. Trial and error is often a very convenient and powerful way of learning with computers, one that everyone uses, but it's not nearly as convenient in other matters.

The bottom line is that it's very interesting to see how the predictions of these prominent people of the past failed. Sometimes reality surpassed their predictions; other times it fell short. For example, Doctor Mayo correctly predicted that most infectious ailments would be minimized and controlled, and that most deaths would be due to heart disease, cancer, etc. He predicted a life expectancy of 70 years in the USA, and it's now 77. However, these great minds of the past also predicted that poverty would have ended and that we would have an equitable wealth distribution, and it's clear that that is not the case.

When making a prediction about the distant future, it's difficult not to fall into wishful thinking. Things change faster than ever, and current technology is indistinguishable from magic. Because of that, and because my field of expertise is computers, that's where I will make (most of) my predictions: I have better insight into it. However, that also makes it more difficult for me, because as you know, 80 years in computing might be comparable to 5 centuries in other fields.

My predictions for 2091:

Privacy will be at the same time a very important, heavily legislated issue, and a futile one. Governments and private companies will have found unimaginable ways of cooperating to invade citizens' privacy, and still most people will think it is for their own good and will cooperate with this task every single day, at all hours. Some countries like Iceland will have put a limit on this, but people will still be so dependent on technology that it won't matter that much.

Most people won't call themselves cyborgs, but they will be: they'll have some kind of alien technology in their bodies to gather information and communicate, or perhaps they'll wear fancy, stylish hats that read their minds. Silent people walking down the street will be able to talk with each other using this technology.

Computers will have changed a lot, but then again not that much. We will still be using devices very similar to some we use today: screens and keyboards – or at least computer programmers will. Non-trivial software will still be created in a written computer language. Of course, trivial will have a different meaning in the future.

Cars will still be a very important means of transport. But they will be electric, in most cases public taxis in big cities, and no person will drive them: they will drive themselves, and thus roads will be much safer than today.

Genetic advances will still be happening, and it will feel like the field has only just started to emerge. It will be like computer intelligence is for us: it will feel like it never quite took off, but it did indeed. We, the people born before roughly 2030, will be old and very ugly in the eyes of the new young generation, who will all be tall, healthy, beautiful and have perfect bodies. Life expectancy will have reached 100 years, so I might still be alive, but I'll probably have forgotten this post.

Presenting git timetracker

Damn, it's been a while since the last post, again. Anyway, I'm here to present a new git tool I've been working on together with Daniel (danigm): the new git timetrack command, part of the distributed time tracker project. It allows you to track the time spent on the commits made to a git repository.

It's quite simple to use: you run git timetrack --start and the clock starts counting. Then you go for a coffee (you use git timetrack --stop for that), and when you come back you continue counting the time by executing git timetrack --start again. Then you make a commit, it gets automatically annotated with the time spent, and the clock stops counting. Some random notes, followed by an example session:

  • To start using git timetrack, you first need to execute git timetrack --init in your project to add the needed hooks and options to .git/config.
  • The time annotations will appear in git log in seconds, and will be shown more nicely when using git timetrack --log.
  • If you forgot to start tracking the time for a commit, you can execute git timetrack --set X to set the clock to X minutes, and then continue counting the time with git timetrack --start. And if you messed up the time and want to start from zero, you can do a git timetrack --reset (or git timetrack --set 0).
  • You can add an estimate of the time you spent on the last commit with git timetrack --set X and then "amending" the commit (adding the time spent) with git timetrack --amend. Using that, you can also change the time spent on the last commit, and you can change/set the time spent on any commit by giving the commit ref to the --amend option.
  • To check how much time you have currently spent on the next commit (i.e. the status of the stopwatch), use git timetrack --current.
  • To check the time spent on the project, use git timetrack --summary. It internally makes use of git log and accepts the same options, so git timetrack --summary --author=foo@server.com --since=1week dir/file.c would tell you the time foo@server.com spent on dir/file.c during the last week.
  • If you're going to start hacking and thus do a batch of commits, you might not want to have to execute git timetrack --start after each commit to start counting the time again. That's done automatically for you if you use the aptly named option git timetrack --start-hacking!
  • List all the options available with git timetrack -h.
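Putting the commands above together, a typical session might look roughly like this (the commands are the ones described above, but the file name and the flow are just an illustration, not literal tool output):

    $ git timetrack --init            # add the hooks and options to .git/config
    $ git timetrack --start           # start the stopwatch
      ... hack on src/foo.c ...
    $ git timetrack --stop            # pause while you go for a coffee
    $ git timetrack --start           # resume counting
    $ git timetrack --current         # check how long you've been at it
    $ git commit -m "Implement foo"   # the commit is annotated with the time spent; clock stops
    $ git timetrack --log             # show commits with their time-spent annotations
    $ git timetrack --summary --since=1week   # total time spent this week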

Git timetracker uses git notes to add time-spent annotations to commits. That means the annotations do not modify the commits (they are left intact); they are stored in a git notes branch instead. Git notes itself is quite recent, and git timetracker requires git-next (git's unstable development branch) because it makes use of the git notes merge feature to share the timetracker notes between users and merge changes to them nicely.

Why did we develop timetracker? We are creating a small software company called Wadobo, and we need to make (rough) estimates of how long development will take in the projects we work on, so we thought the first thing we needed was some real information about the time spent on those projects.

I think this can be really useful for the mentioned use case. The tool is quite new, but it works-fine-for-me (TM), and we've already been using it in a project for nearly two weeks. Hopefully you like this little gem from us.

PS: I wanted to do a screencast, I really tried, but boy is that a difficult task. ffmpeg is segfaulting (I don't use/have PulseAudio, just plain ALSA), xvidcap doesn't capture audio in the Arch Linux binary and is difficult to compile, recordmydesktop gives me audio artifacts, and aur/screencast doesn't compile… I gave up; I have better things to do. What tools would you recommend?