Set up Jenkins on Tomcat/Windows with a Mercurial Repository in Bitbucket over SSH

This is the list of steps I followed to get a Jenkins job (running inside Tomcat on a Windows build machine) to clone a Mercurial repository hosted on Bitbucket using SSH authentication.

  • Install Tomcat (I used version 7)
  • Install Jenkins on Tomcat
  • Create a Windows user to run Tomcat (let's call it CI_USER, but you can name it as you like)
  • Set the Tomcat process to run as CI_USER
  • Log in to the machine as CI_USER
    • Download the PuTTY zip so that you can create an SSH key.
      • Generate a key using "PUTTYGEN.EXE"
      • Copy the public key and add it as a deployment key in your repository settings on Bitbucket.
      • Save the private key without a passphrase.
    • Install TortoiseHg on the Jenkins machine.
    • Create a per-user Mercurial settings file at C:\Documents and Settings\CI_USER\Mercurial.ini (Look here for references)
    • Add this to the file:

      [ui]
      ssh="C:\PATH_TO_TORTOISE_HG_INSTALL_ROOT\TortoisePlink.exe" -batch -i "C:\PATH_TO_PRIVATE_KEY_FILE_CREATED_WITH_PUTTYGEN.ppk"

    • I'm not sure this step is strictly mandatory, but do it just in case:
      • Manually clone your repo using TortoiseHg, using the SSH URL from Bitbucket as the source and a temporary folder as the destination (see the example commands after this list).
      • Accept the Bitbucket certificate/host key and add it to the cache (click Yes if a window appears asking about it).
    • Install the Mercurial plugin on Jenkins.
    • Go to Jenkins and configure your job to point to the SSH URL.
    • Happy CI!
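
If you want to verify that the key and the Mercurial configuration work before involving Jenkins, something like the following (run from a command prompt while logged in as CI_USER) should do it. This is only a rough sketch: the paths, account and repository names are placeholders you need to replace with your own.

  • plink.exe -batch -i "C:\PATH_TO_PRIVATE_KEY_FILE_CREATED_WITH_PUTTYGEN.ppk" hg@bitbucket.org – tests the raw SSH connection; Bitbucket should report that you are logged in and then close the connection.
  • hg clone ssh://hg@bitbucket.org/YOUR_ACCOUNT/YOUR_REPOSITORY C:\Temp\clone-test – exercises the whole chain (Mercurial.ini, TortoisePlink and the key) without any password prompt.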

Debug a Google Chrome Developer Tools Panel Extension

A Google Chrome Developer Tools Panel Extension is (as stated in the documentation):

a way to integrate your extension into Developer Tools window UI: create your own panels, access existing panels, and add sidebars…

It's an awesome way to enhance the Chrome user's experience while they are viewing another page in the main content area. For those who don't know, Chrome extensions are plain HTML + JS + CSS files. They are really easy to implement.

The problem: How to debug a Google Chrome Developer Tools Panel Extension?

One of the main difficulties I found when starting to develop them was debugging. I could not find an easy way to see the errors they were throwing, and I could not figure out how to use the Developer Tools itself to inspect the elements, check the JavaScript code, etc.

When you create an extension, you have a manifest.json file that contains information about it. Here is an example:

{
  "name": "Sample",
  "version": "0.1.0",
  "description": "Sample DevTools Extension",
  "devtools_page": "devtools.html",
  "manifest_version": 2
}

The devtools.html usually looks like this:

<html>
<body>
<script src="devtools.js"></script>
</body>
</html>

The devtools.js file is where the magic happens. Here you usually call the Chrome API methods to register your objects, panels, etc.:

// This will register the panel.
// NOTE: The second argument is required, so you need an image file to be the icon of your panel, otherwise it just won't work (at least it did not for me)
chrome.devtools.panels.create("Sample Panel",
                              "Icon.png",
                              "panel.html",
                              function(panel) { 
                              });

The panel.html is the page that will appear on the Developer tools panel. Here is an example:

<html>
<body>
    <h1>Hello from a panel!</h1>
</body>
</html>

Here is the final result:

The solution: Add the panel html page as an options page

Just add the panel html page as the options page of your extension, changing the manifest.json file like this:

{
  "name": "Sample",
  "version": "0.1.0",
  "description": "Sample DevTools Extension",
  "devtools_page": "devtools.html",
  "options_page": "panel.html",
  "manifest_version": 2
}

With that, you will be able to open your panel page like an options page, by clicking the Options link of your extension on the "Extensions" tab:

By clicking the link, Chrome will open the page as a normal HTML page, giving you access to the Developer Tools itself to debug your extension:
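
One caveat: when panel.html is opened as an options page it runs outside of the Developer Tools, so the chrome.devtools.* APIs are not available there. If your panel page loads a script that uses them, you may want to guard those calls. Here is a minimal sketch, using a hypothetical panel.js that is not part of the sample above:

// panel.js (hypothetical): loaded by panel.html
if (window.chrome && chrome.devtools) {
    // Running inside the Developer Tools: the devtools APIs are available
    console.log("Panel is running inside DevTools");
} else {
    // Running as a plain page (e.g. opened through the Options link): skip devtools-only code
    console.log("Panel opened as a normal page; devtools APIs are not available");
}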

Happy debugging (and coding)! :D

Easier automated builds and continuous integration with Naven: A .NET port of Maven

I love Apache Maven. I really do. It increased my productivity and quality on Java projects so much! I stopped worrying about the small and repetitive details of my build process.

But here in 4N1, we have a lot of projects in .NET, and we could not find a good, easy and free tool like Maven for the .NET world, so we decided to start our own.

Naven comes to the rescue!!!

Naven is a port of the Maven concept, meaning that you don't need to specify a build process; you just need to describe what your application looks like, and Naven will "create" the build process for you. Implemented as a command-line application (but with the real logic in a C# 4.0 API), Naven can be used as a build automation tool and executed inside a continuous integration server; in fact, we already use it on our CI server (Jenkins).

We also built Naven to help spread build automation and CI concepts, making it easier for anyone to automate their build process.

Apache Maven has a POM (Project Object Model) to define the characteristics of your projects. Since the .NET world already has a POM of sorts (the *.csproj and similar files), Naven only needs an AOM (Application Object Model). The AOM is a file that contains the basic information about your application.

Components

Basically, a Naven application has components. The currently supported ones are:

  • Solution: References an existing .NET solution.
  • Project: References an existing .NET project.

Phases

In the Naven world, a build process is divided into phases. Phases are nothing more than steps of a build process. The currently supported ones are:

  • Initialize: Deserializes the AOM file and initializes all necessary objects.
  • Validate: Validates the AOM file, looking for errors (e.g. pointing to an invalid solution, a non-existent project file, etc.)
  • Resolve Dependencies: Since version 0.2.0, Naven integrates with Nuget, meaning that if you have projects that specify their dependencies in a Nuget file (packages.config), it will install the packages for you.
  • Compile: Using MSBuild, executes the compilation of the components of your application.
  • Tests: Using NUnit, executes the automated tests of your application.

Besides the "out-of-the-box" phases, you can extend the Naven build process with NAnt, using "custom phases". They run before and after their respective phases:

  • AfterInit
  • BeforeValidate
  • AfterValidate
  • BeforeResolve
  • AfterResolve
  • BeforeCompile
  • AfterCompile
  • BeforeTest
  • AfterTest

Goals

A Goal is something that you want Naven to do with a component. The currently supported ones are:

  • Resolve: Tells Naven to resolve the dependencies of the component. In the current version you don't need to specify this Goal because Naven will process all "packages.config" files it finds.
  • Compile: Tells Naven that the component should be compiled.
  • Test: Tells Naven that the component has tests that should be executed.

NOTE: In the current version you need to specify the Goals manually, but we are planning to apply the concept of Convention over Configuration and try to figure out the expected Goals for a particular component, for example, adding the Test goal automatically to all projects whose names end with ".Tests".

Here is an example of a concrete Naven AOM file:

<?xml version="1.0" encoding="utf-8"?>
<DotNetApplication xmlns="http://naven.4n1.pt/aom" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://naven.4n1.pt/aom http://naven.4n1.pt/0.3/aom.xsd">

  <!-- Application name -->
  <Name>Sample App Name</Name>

  <!-- Application description -->
  <Description>This is just a sample application.</Description>

  <!-- Path where Nuget packages will be installed -->
  <NugetPackagesFolder>packages</NugetPackagesFolder>

  <Components>

    <!-- Represents an existing solution -->
    <Solution Path="Solution.sln">
      <Goals>

        <!-- Tells Naven to compile this solution. Normally this would be the solution that contains all the projects of your application. Here at 4N1 (http://www.4N1.pt), we normally have a "main" solution with all the projects. It's easier and faster to compile a lot of related projects. -->
        <Compile/>
      </Goals>
    </Solution>

    <!-- Represents an existing project -->
    <Project Path="SampleProject.Tests\SampleProject.Tests.csproj">
      <Goals>

        <!-- Tells Naven to run the tests on this project. Naven will look in the .csproj file to find out where the compiled assembly is. -->
        <Test/>
      </Goals>
    </Project>

  </Components>
  <Phases>
    <AfterInit>
      <!-- Inside a "custom phase", you can use all NAnt tasks (as long as you include their assembly in the same folder as the Naven executable) -->
      <echo message="This is happening after Init"/>
      <echo message="This is also happening after Init"/>
      <property name="Environment" value="SomeValue"/>
      <echo message="This is the environment: ${Environment}"/>
    </AfterInit>
  </Phases>
</DotNetApplication>

How to use it?

Just grab the latest release (v0.3.0) from the Naven repo, unzip it, create your Naven AOM file and run the command line, like this:

  • nvn.exe PATH_TO_THE_AOM_FILE Resolve – to resolve the dependencies of your application;
  • nvn.exe PATH_TO_THE_AOM_FILE Compile – to compile your application;
  • nvn.exe PATH_TO_THE_AOM_FILE Test – to compile and test your application.

Properties Support

When extending Naven's native build process with NAnt, you may need to supply additional property values when invoking Naven from the command line. Just like NAnt's /D:PropertyName=PropertyValue switch, Naven has a similar mechanism:

  • nvn PATH_TO_FILE GOAL /p=PropertyOne:ValueOne /p=PropertyTwo:ValueTwo

In the "custom phases" implementation, you reference those defined properties just like normal NAnt properties:

<?xml version="1.0" encoding="utf-8"?>
<DotNetApplication xmlns="http://naven.4n1.pt/aom" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://naven.4n1.pt/aom http://naven.4n1.pt/0.3/aom.xsd">
  <Name>NAnt Support Application.</Name>
  <Description>An application that shows the NAnt support.</Description>
  <Phases>
    <AfterInit>
      <echo message="These are property references: ${PropertyOne}, ${PropertyTwo}"/>
    </AfterInit>
  </Phases>
</DotNetApplication>

What are the next planned features?

Naven has a long way to go to get even a little bit closer to the great tool Maven is. With that in mind, we have already planned a few features.

  • Convention support: by parsing the main solution file or searching for project files below the base folder.
  • Support for other agents: an agent (for now) may be a DependencyResolver, a Compiler or a Tester. Naven's code is built with dependency injection concepts, so the default agents can be swapped. The only ones available now are:
    • MSBuild for compilation
    • Nuget for dependency resolution
    • NUnit for testing
  • Additional phases: we also plan to extend the Naven build process to include some additional phases. Some of them may be:
    • Package: Creates a package of your projects/applications in Nuget format.
    • Install: Installs the package in a package repository, probably a Nuget server for now.
    • Deploy: Deploys the application/project to a server.

For all of this, we would really like to receive some help and opinions, especially because Naven is open source, and also because we hope it makes your life easier. :D Feel free to check it out, download it, use it, and contribute. The 4N1 team thanks you.

Creative Programming

I never thought I was a creative person. It was more natural for me to think logically and rationally. Until recently I (wrongly) divided those two areas: creativity and logic.

I began my journey into computers and programming almost 15 years ago. Recently, I had the opportunity to work with an amazing and creative human being, Ana Teixeira. Once, while developing an idea, she said to me:

“You are a really creative guy!”

I was shocked; I never thought someone would think I'm creative. At least not someone sane, and who was indeed creative.

Some days ago, Ron Reis suggested in the SILO discussion group an amazing post here written by Tara Sophia Mohr.

It talks about the three personalities, or inner voices, a "creator" has, and how troublesome it can be to bring the right one to a particular situation. Quoting Tara's post:

The Artist: The artist’s domain is drafting, receiving ideas and inspiration, fleshing them out, etc.
The Editor: The editor’s domain is revising, trimming, structuring, etc.
The Agent: The agent’s domain is developing marketing messages for the work, communicating about the work to external stakeholders, and finding distribution, etc.

For more details, check the full article; it's amazing.

I saw myself having those three, even as a programmer. As an artist, I always think about the correct, beautiful and elegant implementation of a piece of code or a solution to a problem. As an editor, I have to rethink the possible problems and correct (read: contain) my inner artist. As an agent (I confess he only shows up a few times), I imagine how everything will impact the client, whether it will sell, whether it will be profitable, etc. This made me think that programming is a kind of creativity.

Like the "creators", we:

  • Have problems to solve (sometimes with only an abstract definition of them)
  • Have moments of inspiration
  • Try out existing methods or invent new ones
  • Defend, revise, improve and give up on our ideas
  • Try to sell ideas/concepts
  • Think about the impact things will have (even if just a little)
  • And much more

Tara's post gave me awareness of this "triad" and of how to begin to understand and improve them, because we all need to have the three. They do not need to exist in equal proportions, but they all must exist. And there will always be a situation where one of them is better suited. She also gives a piece of advice to make it easier to switch between them: try to associate each personality with a color and a song. That is so cool! When you need to impersonate one of them, you can think of the color or the song you chose for it. Awesome.

I have only chosen my colors so far:

Artist: Blue
Because the first thing that comes to mind when I think of art is the ocean. The movement of the waves, the sound of it.

Editor: Black
Because black is raw, it does not change. It's pure, it's logic (at least for me, heheh).

Agent: Red
The passion, the hate, the good, the bad, the wow, the yeah.

I will never again think that I'm not a creative person.

What are your colors? What are your songs? Feel free to comment about them :)

Synchronizing time/clocks between a JavaScript client and an ASP.NET MVC 4 server

I just wish working with Date() in JavaScript was easier.
Recently, here at 4N1, we had to make some date-time calculations on the client side of a web application. The app architecture is:

Client side

  • Html 5: Just added the 5 to look cool. :D
  • Javascript: You know what that is.
  • Knockout JS: An amazing Javascript MVVM framework.
  • TaffyDB: An awesome Javascript database.

Server side

Since the client and the server time can differ, we need a way to know exactly by how much, in order to adjust our calculations. Talking with an old colleague, Rui Milagaia, the following solution came up:

Time sync Logic

A device is anything that has a time. This logic can be applied in any scenario (computer A to computer B, client to server, server to server, device A to device B, etc.).

NOTE: This solution will not guarantee millisecond precision, but if you're doing calculations that only go down to the second, it should be OK.
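
Conceptually, the calculation behind the time difference looks something like this. This is a minimal sketch of the idea, not the actual library code; it assumes jQuery is loaded and that the URL returns the server time as an ISO 8601 string in JSON (like the MVC action shown further down):

function getTimeDifferenceSketch(serverTimeUrl) {
    // Client time right before asking for the server time
    var before = new Date().getTime();

    var serverIsoString;
    $.ajax({
        url: serverTimeUrl,
        async: false, // synchronous on purpose, to keep the measurement simple
        dataType: "json",
        success: function (data) { serverIsoString = data; }
    });

    // Client time right after the response arrived
    var after = new Date().getTime();
    var roundTrip = after - before;

    // Assume the server produced its timestamp roughly in the middle of the round trip
    var serverTime = Date.parse(serverIsoString);
    return serverTime - (before + roundTrip / 2);
}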

Back to our app: since it is JavaScript, I used jQuery to perform an AJAX request to an MVC action. I just needed to guarantee that the request was synchronous, otherwise the asynchronous behavior adds noise to the calculated value. I've created a GitHub repository with the source code here. You can use it with other server-side technologies (JSP, Rails, Grails, etc.). Here are the important steps:

Client side

  1. Add a reference to the 4n1.timeSync.js file:
    <!-- jQuery 1.5 or later is a foranyone.timeSync dependency -->
    <script src="jquery-1.6.4.js" type="text/javascript"></script>
    
    <!-- Reference to the 4n1.timeSync script file-->
    <script src="4n1.timeSync.js" type="text/javascript"></script>
    
    <!-- To make it easier to parse date strings with the ISO 8601 format, we use the library from https://github.com/csnover/js-iso8601 -->
    <script src="iso8601.js" type="text/javascript"></script>
    
  2. Set the expected properties, in my case only the url to the MVC Action, and call the foranyone.timeSync.getTimeDifference function to retrieve the time difference in milliseconds.

    foranyone.timeSync.url = "www.somedomain.com/ServerTime/GetServerTime";
    
    // This variable now holds the difference between the client and the server in milliseconds. It can be positive or negative.
    var timeDifference = foranyone.timeSync.getTimeDifference();
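    
    // (Illustrative only, not part of the library) With the difference in hand,
    // an adjusted "server now" can be computed on the client at any moment:
    var serverNow = new Date(new Date().getTime() + timeDifference);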
    
  3. Server side

    1. Just implement an action that returns the server time:
      public class ServerTimeController : Controller
      {
          public ActionResult GetServerTime()
          {
              // Using an ISO format string without all the milliseconds because IE cannot process it :p.
              return Json(DateTime.Now.ToString("yyyy'-'MM'-'ddTHH':'mm':'ss.fff%K"), JsonRequestBehavior.AllowGet);
          }
      }
      

    And that's it. Happy synchronizing! :D

Knockout performance tip #1

After reading two posts (#1 and #2) from one of the Knockout project members, Ryan Niemeyer, I did some performance testing on my application and found an improvement:

The Problem: Rendering and processing elements that are not visible

<table data-bind="foreach: Items">
	<tr>
		<td data-bind="text: Name"></td>
		<td>
			<input type="button" data-bind="click: edit" value="Edit"/>
		</td>
	</tr>
	<tr data-bind="visible: IsEditing()">
		<td colspan="4" style="background: white;">
			<table data-bind="foreach: SubItems">
				<tr>
					<td data-bind="text: Name"></td>
				</tr>
			</table>
		</td>
	</tr>
</table>

This code creates a table with two rows for each Master Item (in the Items list):

  • A row with the Item name and an Edit button
  • Another row with an inner table with the list of the child items (in the SubItems list).

The thing is, the second row only appears when the user clicks the Edit button (calling the Item.edit method, which in turn changes IsEditing to true). With the code above, Knockout was processing all bindings (even the ones from the SubItems). Depending on the number of SubItems or bindings, the ko.applyBindings method can take quite some time.

The Solution: Using an if binding to control which elements are rendered and processed

<table data-bind="foreach: Items">
	<tr>
		<td data-bind="text: Name"></td>
		<td>
			<input type="button" data-bind="click: edit" value="Edit"/>
		</td>
	</tr>
	<tr data-bind="if: IsEditing()">
		<td colspan="4" style="background: white;">
			<table data-bind="foreach: SubItems">
				<tr>
					<td data-bind="text: Name"></td>
				</tr>
			</table>
		</td>
	</tr>
</table>

Changing the visible binding to the if binding made Knockout only process and render the details row (with all the SubItems) when the master item was being edited. On my page (a lot more complex than the example above, and with a lot of data), it made the ko.applyBindings execution 2 seconds faster.

ASP.NET MVC4/Upshot/Knockout: Remote + Local DataSource

The SPA (Single Page Application) I'm creating does not sync all changes with the server through the Upshot RemoteDataSource. Because of that, I had to use a mixed approach: a RemoteDataSource plus a LocalDataSource. The remote one brings the data from the server, while the local one is used to make changes that update the user interface without going to the server. Here is the code:

// Used to "construct" new Model instances
function Model(data) {
    
	var self = this;

	// add properties from the JSON data result
	upshot.map(data, "SomeEntity:#SomeNamespace", self);

	// add properties managed by upshot
	upshot.addEntityProperties(self);
};

function ViewModel() {

	var self = this;

	// Code generated by the server-side @(Html.UpshotContext(true).DataSource<SomeNamespace.EntitiesController>(x => x.GetEntities()))
	upshot.metadata({ "SomeEntity:#SomeNamespace": { "key": ["Id"], "fields": { "Id": { "type": "String:#System" } }, "rules": {}, "messages": {}} });

	upshot.dataSources.Items = upshot.RemoteDataSource({
		providerParameters: { url: "/api/Entities?action=", operationName: "GetEntities" },
		entityType: "SomeEntity:#SomeNamespace",
		bufferChanges: true,
		dataContext: undefined,
		mapping: {}
	});

	// Creates a reference to the items returned by the remote datasource
	self.dataSource = upshot.dataSources.Items.refresh();

	// Create a local datasource that references the items returned by the remote datasource
	self.localDataSource = upshot.LocalDataSource({ source: self.dataSource,
		autoRefresh: true, allowRefreshWithEdits: true
	});	

	// Creates a filter to hide the deleted items (since we are not syncing the changes, they would keep appearing in the UI otherwise)
	self.localDataSource.setFilter({ property: 'IsDeleted', value: false, operator: '==' });

	// Creates a reference to the local datasource entities to use it on the UI binds
	self.items = self.localDataSource.getEntities();

	self.addNew = function () {

		// Create a new item
		var item = new Model(null);

		// Pushes the item to the list of items. This will update the UI
		self.items.push(item);
	}

	self.remove = function (item) {
    
		// Removes the item from the localDatasource
		self.localDataSource.deleteEntity(item);

		// To update the UI, you have to refresh the local datasource (I don't know why it does not do it automatically)
		self.localDataSource.refresh();
	}
}

The autoRefresh property is self-explanatory. The important one here is the allowRefreshWithEdits property, which allows us to refresh the local datasource while it has pending changes (it triggers an error if you don't set this).

Another important thing is to add a filter on the local datasource to hide all the deleted entities. Since the deletions are not committed to the server, deleted items would otherwise still appear in the UI.

Now, the user interface just has to bind to the items from the localDataSource, not the remote one:

<script type="text/javascript">
    $(function () {

        // Creates a ViewModel instance
        viewModel = new ViewModel();

        ko.applyBindings(viewModel);
    });
</script>
<div data-bind="foreach: items">
	<!-- Do something to show the items -->
</div>

ASP.NET MVC 4 ApiController serialization error: No readonly properties serialized and System.Runtime.Serialization.InvalidDataContractException

I'm starting a new project using the ASP.NET MVC 4 runtime. I'm using the new ApiController feature, really cool indeed. I had a business entity (domain class) that I wished to expose. But since the ApiController default serializer doesn't write read-only properties, the best answer I found on the net was to create a view model wrapping an instance of my business entity, exposing the properties I wanted. Check the code:

public class DomainEntityApiController : ApiController
{
	// GET /api/domainentityapi
	public IEnumerable<DomainEntityViewModel> Get()
	{
		List<DomainEntityViewModel> list = new List<DomainEntityViewModel>();

		for (int i = 0; i < 10; i++)
		{
			list.Add(new DomainEntityViewModel(new DomainEntity(i + "Id", i + "Name")));
		}

		return list;
	}
}

public class DomainEntityViewModel
{
	private DomainEntity _entity;

	public DomainEntityViewModel(DomainEntity entity)
	{
		_entity = entity;
	}

	public String Id
	{
		get { return _entity.Id; }
		set 
		{ 
			// Do nothing 
		}
	}

	public String Name
	{
		get { return _entity.Name; }
		set
		{
			// Do nothing 
		}
	}
}

public class DomainEntity
{
	private String _id;

	private String _name;

	public DomainEntity(String id, String name)
	{
		this._id = id;
		this._name = name;
	}

	public String Id
	{
		get { return _id; }
	}

	public String Name
	{
		get { return _name; }
	}
}

Compiled. Went to http://localhost/api/DomainEntityApi and BANG! Error:

Type ‘TakeThatTime.Web.Models.DomainEntityViewModel’ cannot be serialized. Consider marking it with the DataContractAttribute attribute, and marking all of its members you want serialized with the DataMemberAttribute attribute.  If the type is a collection, consider marking it with the CollectionDataContractAttribute.  See the Microsoft .NET Framework documentation for other supported types.

After a long search about the error with nothing found (at least not on Google's first results page ;)), I tried something: putting a no-args constructor on the view model class, like this:

public class DomainEntityViewModel
{
	private DomainEntity _entity;

	// Parameterless constructor added only because of the serialization issue; it is never meant to be called
	public DomainEntityViewModel()
	{
		throw new NotSupportedException();
	}

	public DomainEntityViewModel(DomainEntity entity)
	{
		_entity = entity;
	}

	public String Id
	{
		get { return _entity.Id; }
		set 
		{ 
			// Do nothing 
		}
	}

	public String Name
	{
		get { return _entity.Name; }
		set
		{
			// Do nothing 
		}
	}
}

And tried again. And it worked. Here is the response now:

<?xml version="1.0" encoding="utf-8"?><ArrayOfDomainEntityViewModel xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><DomainEntityViewModel><Id>0Id</Id><Name>0Name</Name></DomainEntityViewModel><DomainEntityViewModel><Id>1Id</Id><Name>1Name</Name></DomainEntityViewModel><DomainEntityViewModel><Id>2Id</Id><Name>2Name</Name></DomainEntityViewModel><DomainEntityViewModel><Id>3Id</Id><Name>3Name</Name></DomainEntityViewModel><DomainEntityViewModel><Id>4Id</Id><Name>4Name</Name></DomainEntityViewModel><DomainEntityViewModel><Id>5Id</Id><Name>5Name</Name></DomainEntityViewModel><DomainEntityViewModel><Id>6Id</Id><Name>6Name</Name></DomainEntityViewModel><DomainEntityViewModel><Id>7Id</Id><Name>7Name</Name></DomainEntityViewModel><DomainEntityViewModel><Id>8Id</Id><Name>8Name</Name></DomainEntityViewModel><DomainEntityViewModel><Id>9Id</Id><Name>9Name</Name></DomainEntityViewModel></ArrayOfDomainEntityViewModel>

Rework, an awesome book!

Well, this post may look like it has nothing to do with code/programming but fear not, it has everything to do with it.

A good friend of mine, Rui Milagaia, talked to me about a book called Rework. He told me the book was by some guys who worked at a software development company, but in a non-conventional way. And they succeeded.

Time passed, and only recently did I read it. I loved it. All of it.

Honestly, I recommend this book to everyone who wants to be something in life. It's so easy to read: no fancy words, no padding. The book is divided into really small chapters (more like articles), mostly 1 or 2 pages long.

It made me think a lot about my behavior and what I want to do with my life. And not only that, it showed me that some of my values and ideas are not impossible, that there are people out there who think the same and were able to achieve something. If you want to know more about the book, the official site is here:

http://37signals.com/rework/

If you know the book or the company, feel free to comment and share your opinion.

New JBrisk version 0.1.2

This new version of the JBrisk project contains only a critical fix for a bug in the JBriskTestRunner for parameterized unit tests. The bug affected the display of runs whose arguments contained escaped literals in the Eclipse JUnit test runner.

:D
