It is a common mistake to think that TDD is just about tests, mainly because the word “test” is right there in the name. But testing is more like a side effect of TDD, not its core purpose. At its heart, TDD is about designing your software.
I remember someone once told me that good programmers start with pen and paper. I still feel a bit guilty when I skip this step, which happens more often than I would like to admit. Sketching your ideas on paper or a whiteboard helps you organize your thoughts, spot potential problems, and think deeply about what you are creating. This always leads to better software design.
TDD, in a way, is like using tests to sketch out your design ideas. It’s about creating a blueprint for your software, where tests guide the development process. This approach helps ensure your design is solid and also executable. If I were to rename TDD to better reflect its true purpose, I might call it Notes-Driven Development (NDD), emphasizing the planning and design aspect of the process.
The idea of writing tests strictly before the code was one big reason I hesitated about TDD. I saw lots of people online saying it wasn't doable, and I thought, if so many people agree, maybe TDD only works in special situations.
However, my perspective shifted when I realized that, rather than "before", I should use the word "along" to describe when the tests should be written relative to the code. This subtle change in terminology changed how I approach TDD: I now write the tests along with the code. I view the test file as my notebook, where I write down what I expect the code to accomplish, or document its behavior if it matches my initial plan. In essence, I am recording the behavior of the code as I develop it. It does not matter if this documentation happens slightly before or after I write the code, as long as it is done close to the time of coding.
Like when I am fixing my laptop or setting up complex infrastructure for a project, I like to write down the steps. It helps me remember and acts as a guide. TDD feels similar. It is like keeping a diary of what I am thinking as I code.
The act of trying to write the tests before the code forces us to think more clearly about our design. Just the intention is enough. It does not mean you have to write down all the tests before you have written any code. Think about what you are going to add next (in a function, for example) and try to come up with a test for it first. Think of yourself as a single-threaded machine, interleaving the writing of code and tests. Sometimes the code will come first, and at other times the tests.
Many think the goal of TDD is to ensure every part of the code is covered by tests, but that's not the main point. The real goal is to help us design better code. In my experience, if tests become too hard to write, maybe because of over-mocking or overly complex mocks, it usually signals flaws in my code design. Striving to make code more testable naturally leads to improved design, like clearer separation of concerns and functions that fulfill a single responsibility.
TDD nudges you to focus on testing the most important parts of your software. While a high test coverage can be achieved as a byproduct of TDD, it’s the quality of the tests and a better code design that matter most, not just the coverage percentage.
I always thought TDD was only for small bits of code, like unit tests, but that's not true. TDD can be used at higher levels of abstraction, beyond single units of code and across multiple services or layers. You can use it for bigger tests with ATDD (Acceptance Test-Driven Development) by encoding the behavior of your software from a user's point of view. It's not just developers who can do this; people like product managers and testers can help write these big-picture test cases (though the developers should be the ones to implement the tests). When you start a feature or a story, you can use the acceptance criteria and the accompanying examples to write the test cases. This way, everyone knows what needs to be built, and it helps guide how you build it.
Understanding the core ideas behind TDD makes it a lot easier, more rewarding, and even fun. Once you see the benefits for yourself, you won't want to go back.
Here's a quick recap: TDD is primarily a design activity, not a testing one; write the tests along with the code, treating the test file as your notebook; if tests become hard to write, take it as a signal of design flaws; high coverage is a byproduct, not the goal; and TDD scales beyond units, up to acceptance tests with ATDD.
If you are like me, having avoided TDD or tried it and felt it was not right for you or your project, I suggest giving it another go, this time with this fresh understanding.
@DiscriminatorColumn and @DiscriminatorValue annotations, using the value from the child classes to fill the column value. The error was: "The column index is out of range: n, number of columns: m." Cheers.
This generates a variables.tf file by reading other .tf files. Warning: the command below overwrites your current variables.tf file.
grep -oE "var\.(\w+)" main.tf | awk '!a[$0]++' | while IFS= read -r line; do echo $line | cut -d '.' -f 2 | awk '{print "variable \""$1"\" {\n description = \"\"\n}\n"}' ; done > variables.tf
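To see the pipeline in action, here is a self-contained demo. The main.tf content below is an invented example (the variable names ami_id and instance_type are assumptions for illustration); note that instance_type appears twice but is emitted only once thanks to the awk dedup step.

```shell
# Demo in a scratch directory (hypothetical main.tf content, assumed for illustration).
cd "$(mktemp -d)"

cat > main.tf <<'EOF'
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = var.instance_type
  tags = {
    Type = var.instance_type
  }
}
EOF

# The same pipeline as above, split over lines for readability.
grep -oE "var\.(\w+)" main.tf | awk '!a[$0]++' | while IFS= read -r line; do
  echo "$line" | cut -d '.' -f 2 \
    | awk '{print "variable \""$1"\" {\n description = \"\"\n}\n"}'
done > variables.tf

cat variables.tf
```

The resulting variables.tf contains one empty-description variable block per distinct var.* reference.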
http scheme instead of https, which Facebook does not allow. This was happening because the ALB in front of our application instances was using only http. Our architecture is something like:
------------- --------------- ----------- ---------
| | https | | http | | http | |
| Browser | -----> | Cloudfront | ---> | ALB | ---> | EC2 |
| | | | | | | |
------------- --------------- ----------- ---------
This means only Cloudfront knows about SSL and everything downstream is http. There are some reasons out of our control due to which we are stuck with this, and it is not something we will be carrying over to production.
The solution is to somehow send the scheme information from Cloudfront to the Tomcat running on EC2. Usually, servers use headers like X-Forwarded-Proto to configure the protocol; this is useful when hosted behind proxies. Unfortunately, Cloudfront does not forward the X-Forwarded-Proto header; instead it forwards a custom header, Cloudfront-Forwarded-Proto. So we have to tell Tomcat to read this custom header instead of the standard one. This can be done by configuring the RemoteIpValve.
<Valve className="org.apache.catalina.valves.RemoteIpValve"
protocolHeader="cloudfront-forwarded-proto"/>
There are many options in RemoteIpValve and I would suggest you check them out.
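For instance, a fuller configuration might look like the sketch below. The attribute values here are illustrative assumptions, not drop-in settings: in particular, internalProxies must actually match the addresses of the ALB in front of Tomcat, because RemoteIpValve only trusts the forwarded headers when the request comes from an address matching that pattern.

```xml
<!-- Sketch only; adjust internalProxies to your ALB's address range. -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       protocolHeader="cloudfront-forwarded-proto"
       protocolHeaderHttpsValue="https"
       remoteIpHeader="x-forwarded-for"
       internalProxies="10\.\d{1,3}\.\d{1,3}\.\d{1,3}" />
```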
To understand React (or React Native), we should break it into two parts. The first part provides the APIs developers use to tell the framework what should be rendered. The other part (invisible to the developer) interprets these instructions and renders or updates the view in the most performant way possible.
The core APIs remain the same when moving across various view technologies though the view primitives change. Thus, React can be extended to support any view technology by implementing an engine which knows how to redraw the view.
The virtual DOM implementation used in React is just one way of implementing the engine which can redraw the DOM elements. Another naive way is to just replace all the DOM elements of the page. There are other virtual DOM libraries in existence today, which are completely independent of React, and there will be others cropping up.
Unfortunately, some of the links above are not just virtual DOM implementations but also try to implement a framework, thus, handling two different concerns.
In ASP.NET MVC, a view engine is supposed to create HTML from the view, which is markup with some custom code. On the server side this is enough, but on the client side it is not, because we have to modify the DOM without refreshing the page.
The browser itself includes a view engine that can understand the HTML and render the page. It also provides the DOM APIs to modify the view easily (ya, sure). React has shown us that we can create a declarative abstraction over the DOM and provide a declarative view engine. A DVE uses some kind of API to allow developers to specify what needs to be displayed and where.
Let's delve a bit into the kind of APIs a DVE may provide. A DVE needs a DSL to describe what is to be rendered. This can be a template language or a set of functions. It will also require an API to render the view in a container.
viewSpec = /** Description generated by DSL **/;
DVE.render(
viewSpec /** Or template file path **/,
container
);
DVEs would require one final API to update the tree under a particular container.
DVE.update(
viewSpec /** Or template file path **/,
container
);
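A minimal, naive DVE along these lines, one that simply replaces the container's contents on every render, could be sketched as follows. The names DVE, render, and update echo the pseudo-API above; this is an invented illustration, not a real library.

```javascript
// A naive declarative view engine (DVE) sketch.
// It "renders" by replacing the container's entire contents,
// illustrating only the API shape: no diffing, no virtual DOM.
var DVE = {
  render: function (viewSpec, container) {
    container.innerHTML = viewSpec;
  },
  update: function (viewSpec, container) {
    // Naive strategy: throw everything away and redraw.
    container.innerHTML = viewSpec;
  }
};

// Usage with any object exposing innerHTML (a DOM node in a browser):
var container = { innerHTML: "" };
DVE.render("<p>Hello</p>", container);
DVE.update("<p>Hello, again</p>", container);
```

A smarter engine would keep the same two-function API and only change the strategy behind update.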
Note that a DVE can be created not only for the DOM but also for any other view technology. The virtual DOM concept is just one way to implement a declarative view engine for the DOM.
I see a future where we have libraries for:
There may be frameworks that provide all this and an opinionated structure. We will see developers using both frameworks and using these small libraries to create applications.
A developer in the future will be able to choose a view engine based on their requirements. The engine may use the virtual DOM concept or may use something that we have not yet thought of, because virtual DOM is just a strategy, which can be replaced by any other better strategy, maybe even at runtime.
Update: Incremental DOM, an alternative to Virtual DOM.
Now when we chat (she uses her Google Id), I have always had this strange feeling: what if someone is reading this? I know it does not matter, but it just does not feel right. Why should anyone have the power to read the communication between me and my wife, however mundane it is? Why should I have to give up this very simple right to have a private conversation with my wife?
After some thought I decided to never chat with her without end-to-end encryption in place (except for emergencies). So, I installed Pidgin on both her and my machine with the OTR plugin enabled. It was quite simple, and my wife, who is not so tech savvy, chats with me and her friends as if nothing has changed. I installed the ChatSecure app on her Android phone too so that we can chat privately whether she is using her computer or her phone.
This is what Google sees now:
Remember, it’s never about “I have nothing to hide”.
The input should be an array such as [1, 2, 3, 5] and the output will be [2 - 1, 3 - 2, 5 - 3] === [1, 1, 2].
I solved it by using various built-in functions in Haskell, which are also available in lodash.
_.map(_.zip(_.initial(array), _.tail(array)), function (v) {
    // Pair each element with its successor, then subtract.
    return v[1] - v[0];
});
Try out the solution here: http://jsfiddle.net/debjitbis08/p2j02jf6/
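In plain modern JavaScript, without lodash, the same idea can be sketched in one line (the function name differences is mine, chosen for illustration):

```javascript
// Pairwise differences: each element minus its predecessor.
function differences(array) {
  return array.slice(1).map(function (v, i) {
    return v - array[i];
  });
}

differences([1, 2, 3, 5]); // [1, 1, 2]
```

slice(1) plays the role of Haskell's tail, and the map index i gives us the predecessor for free.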
The cases are:
[] -> ""
[""] -> ""
["A"] -> "A"
["A", "B"] -> "A and B"
["A", "B", "C"] -> "A, B and C"
... and so on
First let's look at a naive solution with a loop. This can be improved and optimized, but I was not interested in such a solution, so I didn't explore it further.
function getReadableString(a) {
    var l = a.length,
        str = "";
    for (var i = 0; i < l; i += 1) {
        if (l < 2) {
            str = a[i] || "";
            continue;
        }
        if (i === l - 1 && l > 1) {
            // Drop the trailing ", " before the last element,
            // so we get "A, B and C" rather than "A, B, and C".
            if (l > 2) {
                str = str.slice(0, -2) + " ";
            }
            str += "and " + a[i];
        } else if (i === 0 && l === 2) {
            str += a[i] + " ";
        } else {
            str += a[i] + ", ";
        }
    }
    return str;
}
That is such a mess; let's change that:
function getReadableString(a) {
    var l = a.length,
        str = "";
    switch (l) {
        case 0:
            str = "";
            break;
        case 1:
            str = a[0];
            break;
        case 2:
            str = a.join(" and ");
            break;
        default:
            // a[l - 1] instead of a.pop(), so the input is not mutated.
            str = a.slice(0, -1).join(", ") + " and " + a[l - 1];
    }
    return str;
}
Now, that looks organized. Still, if somehow we can include the cases for array length being 0, 1 or 2 in the default logic, we would have a nicer solution.
function getReadableString(a) {
return [
a
/* Get all the elements except the last one.
* If there is just one element, get that
*/
.slice(0, a.length - 1 || 1)
/* Create a comma separated string from these elements */
.join(", ")
]
/* Process the last element (if there is one) and concat it
* with the processed first part.
*/
.concat(
a
/* Create a copy */
.slice()
/* Take the last element, if there is one */
.splice(-1, Number(a.length > 1))
)
/* Join the two processed parts */
.join(" and ");
}
Is there a better solution? I am still looking.
Update: One of my friends suggested a solution: join the array and replace the last comma with 'and'. I must admit that the solution has a beauty of its own.
function getReadableString(a) {
return a.join(', ').replace(/(.*),\s([^,]*)$/,'$1 and $2');
}
I am not sure if the regex used is the best possible, but it works. The disadvantage of this solution is that it cannot be extended to create strings such as 'A, B, C and 4 others', while the previous solution can be:
function getReadableString(a) {
var l = a.length;
return ([a
.slice(0, Number(l > 2) + 1)
.join(", ")
]
.concat(l < 4 ? a.slice().splice(-1, Number(l > 1)) : l - 2 + " others")
.join(" and ") || "None");
}
Another disadvantage: it is not as much fun as the previous one ☺.
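For completeness, modern JavaScript engines now ship a built-in for exactly this problem, Intl.ListFormat. Note that in English it produces an Oxford comma ("A, B, and C"), so its output does not match the cases above exactly:

```javascript
// Built-in list formatting (ECMA-402 Intl.ListFormat).
var formatter = new Intl.ListFormat("en", { style: "long", type: "conjunction" });

formatter.format(["A"]);           // "A"
formatter.format(["A", "B"]);      // "A and B"
formatter.format(["A", "B", "C"]); // "A, B, and C" (Oxford comma)
```

It also will not do the 'and 4 others' trick, but for plain locale-aware lists it beats hand-rolled string munging.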
Genuinely Functional User Interfaces is a paper by Antony Courtney and Conal Elliott (of FRP fame) that describes a formal model of user interfaces, defining GUIs compositionally as signal transformers.
We will see how these two are similar.
I had read this paper a few months ago and React was open sourced just a few weeks later. After a quick glance at the React home page I sensed that it is very closely related to this paper and thus I started digging into the library for more information.
Quite surprisingly, my web searches for articles relating these two came up empty. The more I looked into React, the more it looked like an implementation of the formal model presented in the paper, and I patiently waited for someone to recognize this connection. Surely the React team must know about this, so why wasn't it mentioned? Were my conclusions wrong? Was I seeing this connection only because I came across the two at almost the same time?
I don't believe that to be the case and below I will provide a mapping between the concepts present in them.
Let us start with a small description of React. Of course, it will be better if you check out their site and study the examples. The most important points to note about React are:
React is not an MVC framework; it does not even use the word framework. As its tag-line says, "A JavaScript library for building user interfaces".
Now let me introduce you to the ideas of the paper "Genuinely Functional User Interfaces". The authors present a UI library Fruit for Haskell based on a formal model they develop in the paper. The key concepts to understand about Fruit and the formal model are:
type GUI a b = (GUIInput, a) → (Picture, b)
Quoting from the paper:
A GUI a b is a signal transformer that takes a graphical input signal (GUIInput) paired with an auxiliary semantic input signal (a) and produces a graphical output signal (Picture) paired with an auxiliary semantic output signal (b).
Now we are ready to form a mapping between the concepts in them:
| React | Fruit |
|---|---|
| Component | GUI |
| Browser events | GUIInput |
| Data coming through props | auxiliary semantic input signal (a) |
| Lightweight DOM representation returned from the render function | Picture |
| Any data sent to other components | auxiliary semantic output signal (b) |
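In JavaScript terms, the table above amounts to reading a component as a function of roughly this shape. This is a loose, hypothetical analogy I am making (counterGUI and its fields are invented), not React's actual API:

```javascript
// GUI a b ≈ (GUIInput, a) -> (Picture, b), read as a plain function.
// "picture" is a lightweight description of what to draw,
// and "output" is data destined for other components.
function counterGUI(input, props) {
  // input: browser-ish events (GUIInput); props: semantic input (a)
  var count = props.count + (input.clicked ? 1 : 0);
  var picture = { tag: "button", text: "Clicked " + count + " times" };
  var output = { count: count }; // semantic output (b)
  return [picture, output];
}

var result = counterGUI({ clicked: true }, { count: 0 });
```

A React component's render plays the role of producing the picture half of the pair, while callbacks and state updates carry the semantic output.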
As we see above, there is a deep connection between them, and understanding this connection will help us understand both of them better.
I will write about extending these ideas in the future; in the meantime, please comment on the benefits we can draw from this.
Enough talk, here is the method:
(someVar | 0)
I hear you saying: come on, that's an old trick to convert to integers. Yes it is, but it also guards against someVar being "", null and undefined.
("" | 0) === 0 //true
(null | 0) === 0 //true
(undefined | 0) === 0 //true
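A caveat worth knowing before relying on this trick: bitwise OR coerces via ToInt32, so it also truncates fractions and wraps anything beyond 32 bits. A few illustrative cases:

```javascript
// The |0 trick coerces through ToInt32, which truncates and wraps:
console.log(3.7 | 0);          // 3  (fraction dropped)
console.log("42" | 0);         // 42 (numeric strings convert)
console.log("abc" | 0);        // 0  (NaN becomes 0)
console.log(2147483648 | 0);   // -2147483648 (wraps at 32 bits)
```

So it is a fine guard for small integer-ish values, but not a general-purpose number sanitizer.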