EPILOGUE

The Future

It’s time to wrap up the notes that I’ve gathered over the last ten years. For now, I’m also done with the advice and tips drawn from the experience I gained designing the NetBeans APIs. That doesn’t mean this topic is fully explored and there is nothing more to say. This is no definitive guide; I could continue writing for the next few months or years. However, that would just prevent this book from getting into the hands of its readers. Everyone I’ve talked to tells me a book such as this is needed, so I want to get it out now. But before I conclude, let me say a few last words about the future of API design and of software engineering as a whole.

Part 1 presents all of API design as a scientific discipline with a strong rational background, not as the art that it sometimes pretends to be. It defines terminology and initial prerequisites that help us measure objectively whether an API design is good. These rules try to be language neutral and applicable to any programming language, not just Java. The theory is unlikely to be complete. Other principles of API design exist elsewhere or are still waiting to be discovered. However, that should not scare us, as Chapter 1 gives us a tool to evaluate the quality of various principles: a way to find out whether a certain piece of advice helps us design better shared libraries and their APIs or not. It gives us the grand meta-principle: selective cluelessness. Cluelessness serves as a yardstick for judging whether various goals really help: if they allow people to know less while achieving more and building better software systems more easily, then the advice is good. There is a need for such advice, especially in the future, when software systems will outgrow the intellectual capacity of any of their designers.

The biggest part of this book is dedicated to projections of such theory into Java. Sometimes these projections are widely applicable and sometimes they apply only to object-oriented languages. Sometimes they are trivial, sometimes they are complex, and sometimes they are controversial. Again, it’s unlikely they cover everything, but they provide a good starting set of API design patterns for Java. We’ve seen that it’s possible to design APIs in Java. It’s just necessary to know the meaning of various language constructs from the standpoints of evolution and the API user.

Part 3 provides tips, tricks, and descriptions of daily procedures that should be fulfilled when designing and especially maintaining APIs. This is where I expect the biggest evolution. As the tools improve, so will the practices. In any event, the future goal is clear: software assembly skills, and API design, which is software assembly’s essential part, need to be simplified and made available to the public. Let’s look in more detail at how to do so.

Principia Informatica

When Isaac Newton published his Philosophiae Naturalis Principia Mathematica in 1687, it was a tremendous moment for the history of science. This three-part book, seen by some as the best single scientific write-up ever made, contained important results and principles such as the initial definition of forces, their mutual interactions, definitions of laws of movement, and also a new mathematical tool making all that possible (differential calculus). It was an amazing achievement for a single person.

However, the most important aspect of that book was not what it contained, but how it treated things that it missed. Isaac Newton was well aware of the fact that though he managed to provide explanations for things previously unexplainable, his mechanical world likely didn’t fully reflect the real world. Today we all know that indeed it doesn’t. Still, because his work was ready for that from the beginning, it’s still useful and important. For a certain set of applications it gives good enough results, and even more importantly, can be used as a teaching tool. The theory is reasonably approachable and understandable to beginners, to whom it provides a perfect introductory step toward modern physics.

Newton was not the first person to create such a theory. Many great minds before him tried that as well, most importantly Descartes. However, humility is the difference. While Descartes believed he could explain the whole world himself, and tried that in his work, Newton notably admitted that his work wasn’t definitive at all. This not only left the doors open to successors, but let him concentrate on proper explanations of the phenomena he knew how to explain, whereas Descartes needed to explain everything. Obviously that is a massive task, and in some situations, such as when explaining interactions of two moving bodies, he could not go deep enough and his conclusions were sometimes not valid. As far as I know, this is not true for Principia Mathematica. What it contains is valid to this day.

Moreover, Principia Mathematica leaves the exploration of the whole complexity of the world for others. As such, it managed to stimulate an enormous activity among European scientists. They adopted and extended Newton’s theories and techniques in ways that their author could not predict in his wildest dreams. Because of this scientific cooperation, physics became the driving force behind the industrial revolution that pushed Europe forward in the 18th and 19th centuries.

The usefulness of Principia Mathematica extends beyond giving everyone a solid tool for describing, or at least approximating, the behavior of the real world. It also provides a strict and solid background theory that stays valid even if the world starts to differ, or is known to differ, from the theory. You can always verify the preconditions. If the real ones match the ones in the theory, you can apply the theory’s conclusions to real objects and get reasonable results applicable to the real world. That match holds for almost all the cases we face in the world around us. Because most of the world we deal with does not involve high velocities or microscopic objects, Newton’s theory is a universal tool for understanding our everyday real-world problems. That is an enormous success for a book written more than 300 years ago.

After reading this book you can likely see that I have deep respect for the work that Newton did and particularly for the style in which he did it, especially for its humility. I wanted to follow the same path and structure, and approach the topic of API design in the same style that Newton approached his mechanical world. I wanted to establish a forum in which others could express their ideas, and I wanted to present and explore some of them. However, I was always ready to admit that my knowledge and my advice are not final. There is much more work to be done, and this book is just a first step. Indeed, somewhere in the shadow of my soul, I would like it to be as important a step as Principia Mathematica. However, I do realize why this book cannot be like Principia Mathematica. This book is just a journal: a collection of notes that I’ve recorded in the NetBeans laboratory over the last ten years. That is far from the situation of Newton’s Principia Mathematica, which was the result of many more years of scientific work, publications, and letter exchanges. Nevertheless, I decided to follow a similar structure. Part 1 tries to form the everlasting skeleton behind API design. It’s as objective as possible. It’s not tied to any programming language, and as such it’s supposed to be generally applicable. Just as Newton took Euclid’s geometric space, added the abstraction of forces, and inserted it into the real world, the structure described in the theory should be applicable to, and insertable into, any programming language that already exists or that will appear in the foreseeable future.

Based on this skeleton I wrote down a lot of recommendations for proper API design in Java. This advice is unlikely to last as long as the theory. As soon as Java dies or morphs significantly, it could be rendered invalid. However, at the time of this book’s writing, it’s important. A whole set of developers are writing their libraries and APIs in Java, and they might find this advice useful. Moreover, I’ve certainly not formulated the theory out of nothing. I mostly extracted it from my real Java experience (a good example of empiricism, isn’t it?), and these Java adventures are captured in Part 2 of this book. They can be seen as guides that led to the creation of the theory’s principles, or you can see them as the results of that theory. Indeed, the best scenario would be if they were both at once. However, that is unlikely, as this book is not as strict as Newton’s Principia Mathematica. There are no real proofs, and even where I attempt to make some, they are more sketches of proofs than real proofs.

Anyway, I intentionally leave the doors open for those who want to build upon this book. Use the design tips in your project, expand on them, and publish them. Use this book as the starting material for your lessons about API design, prove me wrong, and build upon that. I was not able to cover a lot of topics due to lack of time, lack of intellect, and lack of patience. You are more than welcome to explore them and to provide some hints. Following is a list of topics that I think deserve further investigation.

Cluelessness Is Here to Stay

Predicting the future is always hard, and every prediction is associated with a bit of risk and uncertainty. If I had to bet on something happening, I’d bet on cluelessness. I’d say it is bound to conquer the software engineering world. I’ve discussed the reasons in detail in Chapter 1, but let’s pick a few and look at how they might evolve in the future.

The need for more and more programmers is likely to last for a while, if not forever. The languages or tools these developers use might change. However, as long as society continues to need information, there will always be a demand for people who can help organize it. It’s unlikely that those people will be more clever or better educated than us. Still, they’ll be asked to work on bigger systems, provide more sophisticated results, and render them in more colorful ways or in more dimensions. In short, they’ll be asked to achieve more than we do today. With their skills remaining at approximately the same level as ours, it’s clear what will happen: the portion of the systems that individuals will understand will get smaller and smaller. This is a perfect environment for cluelessness, especially because we don’t want the big systems to be unreliable. Even if we can understand just a portion of them, we still want them to work properly and not get out of control. Selective cluelessness gives an answer to that problem: choose what the key aspects of these systems are and make sure they are checked and verified to work as expected. Cluelessness is here to stay.

The other reason is that assembling applications from big chunks is so convenient. I can see that on various Linux forums. The majority of their traffic consists of tips and tricks on installing some package and tweaking configuration files, as well as samples of shell scripts that you can execute to achieve some highly desirable effect, such as having an X Window system come up after a resume. The amount of this kind of information greatly outweighs the number of tips suggesting how to patch some C source of a program, how to debug it, and so on. Again, this is an indisputable sign of cluelessness. Most Linux users don’t have a clue how their application is written. The only things they care about are its command-line options and the output that it emits, and voilà! Execute the binary, and use pipe and grep to find the important information. Then process it and the system is done and working. Cluelessness par excellence. Moreover, this also shows the importance of APIs. The command-line options and the text printed by the Unix utilities are their most important API, used by all those power Unix users, isolating them completely from the internal details of the utilities’ implementations. I know this from my own experience. Whenever I need to fix a problem on my computer, I look for a solution based on existing binaries. I’ll even spend a day writing shell scripts or massaging configuration files to find a solution before opening the debugger and the library source code to hunt down bugs in the C code. The barrier between these two worlds is so big, and crossing it is so painful, that most of us would rather stay on the side of scripting and command-line arguments. For example, I’ve made dozens of attempts to work around some bug in the X server. However, only once in the last five years have I needed to download the source code and fix a segmentation fault.
Because it’s so easy to invoke and combine Unix applications and so easy to assemble them into bigger chunks of functionality, the number of users greatly exceeds the number of hackers digging inside the library’s C sources. The ratio between these two groups is so big that it’s clear proof of why simple, consistent, and easy-to-use APIs are important. This observation is indeed not limited just to the world of Unix, but is general. The more good APIs we have, the bigger the systems we’ll be able to build without understanding all their details. Cluelessness is here to stay.

The task for the future is to look for more situations where selective cluelessness can be used and to find ways to apply it properly. I’ve discussed in depth the importance of good APIs and automated testing in this effort. However, numerous other tools and practices will help us design more reliable systems with the knowledge we currently have, or even with a lower or more targeted level of knowledge than we have right now. In the end, whenever I design a system, I don’t want to keep the details of why it works in my brain. That would occupy valuable space that I want to fill with memories of my friends, hobbies, family, and life. Of course I need the system to work properly, if for no other reason than not having to fix bugs in it. From time to time I’ll need to come back to it and modify it to do something new. At that moment I want to recall all the important knowledge about it, apply it, and forget it immediately. Cluelessness is my friend. Anyone who discovers a technique to intensify such an approach is my hero, and certainly not just mine. Cluelessness is a friend to all of us.

API Design Methodology

When my friend Tim Boudreau was still seriously considering helping me with this book, he kept saying, “We need to create a methodology.”

I listened with my jaw dropped and objected, “But we don’t have a methodology. We have just a set of advice. It’s not a checklist to follow.”

“Never mind,” Tim replied. “Make it a checklist; make it a methodology. Without it, nobody will really listen to you!”

Well, I never felt comfortable with his point of view, but there must be something to it, because I still keep it somewhere in the back of my mind. Now, when presenting the future of API design, I’d like to talk a bit about it.

This book contains much advice, mostly in the form of notes. It’s based on my own adventures with designing a Java framework, and as such, it’s more a checklist of things you should not do: things that might limit your ability to successfully maintain the API of your library or framework in subsequent releases. At the beginning of my architect days, I knew nothing about proper methodology. My design decisions were mostly driven by instinct. Over time, as mistakes accumulated and started to hurt, I began to learn, and some sort of methodology started to crystallize.

I am still reluctant to formulate a methodology. I’d rather leave that to others, but I am not afraid to predict how it should look. Any methodology needs a theoretical background, and Part 1 of this book provides one. However, just as Newton’s laws don’t tell you how to design a bridge that won’t collapse, it’s not the goal of the theory to provide a handbook for those who want to apply it to real situations. The design part of this book is mostly a projection of the theory onto a particular programming language, and the daily advice part concentrates more on tools and processes than on methodology. Still, I guess that last part can be a good starting point for the exploration of an API design methodology.

Before we start our search for an API design methodology, we need to find a proper name for it. Names are the most important attributes of things and events. If we have a good name, then after hearing it the right bells in our mind start to ring, and the right bells create the right attitude, which influences the acceptance of the methodology. So, let’s look at possible good names we could use for such a methodology. The name could include “high.” High is good, definitely better than “low.” It can even be an adjective, as in “highly effective.” Yes, “effective”; let’s include effective. Costs are always too high, so let’s lower them by using a highly effective methodology. Also, let’s add some form of “structure” to the name. Structure is always good, better than chaos, and if used in a rational way (oh yes, let’s call it “rational” as well), then it can produce enhanced results. By the way, “enhanced” is also a strong name for a methodology that can provide consistent results. “Consistent” is not bad, though I feel a kind of threat there, like a slight attack of disorder. Better to replace it with “unified.” Yes, unity is good; unity is what all our developers need! So I guess we’ve found proper strong words to call our methodology, and now we can also try to find some content for it. No, I’m just kidding; let’s start over.

The theory, as well its applications and then the process advice given in this book, always works with the assumption that things are never perfect, that things evolve. In a perfect, static world, we could use reason to explore every possible solution and then choose the best one. It might take time, but at some point we might obtain the best possible result. This is the style of thinking you can apply to reasoning about the geometry of static objects. However, our world is not like that: our world is changing. It would be useless to search years and years for a perfect solution, when at the time of discovery, the solution might no longer be applicable to the state of our world at that time. Of course, admitting that we are not searching for the perfect solution somehow concedes most of the slickness and beauty. However, it’s at least honest, and it might simplify the search for truth.

Software engineering always oscillates between rationalism and empiricism. Sometimes it’s closer to math, searching for absolute and perfect solutions. Sometimes it gets closer to empiricism, accepting even imperfect solutions that satisfy our requirements and senses; perception defines what is acceptable. The first school of thought has resulted in methodologies that advocate perfect planning and perfect documentation created before the start of coding. With a good planning phase, the coding is just a piece of cake. This is a perfect tool for a world that is static or perfectly known. However, both our world and our knowledge of it are changing. That is why there is another movement that resulted in a set of methodologies pioneered by Extreme Programming (XP). These methodologies inherently admit that our knowledge is never perfect, and advise how to deliver successful systems even with that disadvantage. My impression is that this stream of thought has grown and become strong in the last decade.

When thinking about various approaches to API design, I can see that the range of methodologies also oscillates between rationalism and empiricism. I have deep respect for both approaches, and I like it when truth is also beautiful, as that makes it feel more elegant. However, as you know after reading this book, beauty is not a necessary condition of truth. We can achieve good API design without fully understanding everything, using only partial knowledge about the world, just as methodologies such as XP do. Now, I believe our search for a proper name for a future methodology based on the style of API design advocated in this book is over. XP belongs to a group of software methodologies called agile. Let’s call our methodology Agile API Design.

Agile API Design is a pretty slick name built from strong words. It rings the right bells in the mind and it’s not an empty name—the name is backed by content. I swear that I didn’t think about the name before I started to write this last futuristic part of the book. However, when I look back on all the advice given in prior chapters, I find it absolutely perfect. All those refrains such as “the first version is never perfect,” “know your users,” “get ready for future evolution,” and so on are parallel to the advice given by agile software methodologies. I am still reluctant to provide the actual methodology here, but if you use advice from this book when developing your library or framework, feel free to tell everyone that you are following Agile API Design principles.

Languages Ready for Evolution

The advice provided in this book intentionally tries to stay within the boundaries of Java. It would certainly be possible to think up various language extensions that could make the burden of API design more pleasant. However, that would amount to creating a new programming language. Such a language might be easier to use and program in, but it would be harder to adopt, because the software industry is usually conservative in selecting the programming languages used for coding its software projects. This is not surprising. A new language requires new skills, and it takes time to train developers in them. That is why it seemed important to me to demonstrate that even in plain Java, you can practice Agile API Design. However, getting your API ready for evolution sometimes requires a bit of insight into the internals of the compiler and the virtual machine in order to do the right tricks to make things work correctly. Sometimes these tricks are ridiculously complex (such as the accessor in the section “Allow Access Only from Friend Code” in Chapter 5 or the access modifier in the section “Delegation and Composition” in Chapter 10). That is why it makes sense to ask how some future language or software build system such as Maven could turn these tricks into natural coding constructs.
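To recall what such a trick looks like, here is a simplified, single-file sketch of the friend accessor pattern. The names Accessor and Item are invented for this example; in the real pattern, Accessor lives in a non-public implementation package while Item sits in the API package, so only "friend" code that can reach the accessor is able to create instances:

```java
// Sketch of the "friend accessor" pattern. Item has no public constructor;
// the only way to create one is through the Accessor, which in real use
// would live in a package hidden from ordinary API clients.
abstract class Accessor {
    private static volatile Accessor DEFAULT;

    static Accessor getDefault() {
        if (DEFAULT == null) {
            try {
                // Force Item's static initializer to run and register itself
                Class.forName(Item.class.getName(), true,
                        Item.class.getClassLoader());
            } catch (ClassNotFoundException ex) {
                throw new IllegalStateException(ex);
            }
        }
        return DEFAULT;
    }

    static void setDefault(Accessor accessor) {
        if (DEFAULT != null) {
            throw new IllegalStateException("Accessor already registered");
        }
        DEFAULT = accessor;
    }

    /** Friend-only factory method for the otherwise unconstructible Item. */
    abstract Item newItem(String name);
}

final class Item {
    private final String name;

    Item(String name) {      // package-private: no public constructor
        this.name = name;
    }

    String getName() {
        return name;
    }

    static {
        // Registration happens as soon as the API class is loaded
        Accessor.setDefault(new Accessor() {
            @Override
            Item newItem(String name) {
                return new Item(name);
            }
        });
    }
}
```

Friend code then calls Accessor.getDefault().newItem("..."), while the constructor stays invisible outside the API's own package. It works, but the amount of machinery needed for such a simple intent is exactly the complexity the text complains about.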

I’ve intentionally used the word “future,” as I am not aware of any language or system that provides this functionality right now. The most you can find today are homemade systems that help with various aspects of API design, but there doesn’t seem to be any general solution.

Some parts of the solution should be in the compiler, or at least reflected in the sources and processed by some annotation processor later. For example, it should be possible to tell the compiler that when you compile with a JDK target of 1.5, no methods or classes introduced in subsequent Java releases should be available. This should work for any library, not just rt.jar. The solution is easy if we can identify the state of an API in a particular version. For example, our libraries and their APIs could annotate each class, method, and field with some @Version(1.4) attribute. If the compiler paid attention to this attribute and ignored everything newer than the version of the library you want to compile against, any reference to a newer element would produce a “not found” error. Of course, this would require that each element visible in the API be annotated with the @Version attribute, which means that packaging and versioning become part of the language. That is a significant mental shift that no compiler engineer I’ve spoken with has been ready to make. I know language designers have thought about modularity. However, that is the modularity created for the Modula programming language back in the ’70s. That kind of modularity solves how you can compile a program part by part, module by module, but it doesn’t prescribe what should happen if one part changes. It’s a static form of modularity, not dynamic assembly from independent parts. It might have been a step forward 30 years ago, but since then the world has become more agile and we should adjust our languages accordingly.
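As a thought experiment, the imagined @Version attribute could look like the sketch below. No compiler honors such an annotation today; everything here (the annotation, SampleApi, VersionFilter) is hypothetical, and the reflection-based filter merely demonstrates how a tool could hide API elements newer than the targeted version:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical versioning annotation attached to every visible API element
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD})
@interface Version {
    double value();
}

// An invented API with elements introduced in different releases
class SampleApi {
    @Version(1.4) public void oldMethod() { }
    @Version(1.5) public void newerMethod() { }
}

class VersionFilter {
    /** Names of methods visible when compiling against `target`. */
    static List<String> availableMethods(Class<?> api, double target) {
        List<String> names = new ArrayList<>();
        for (Method m : api.getDeclaredMethods()) {
            Version v = m.getAnnotation(Version.class);
            // Skip anything introduced after the targeted release
            if (v != null && v.value() <= target) {
                names.add(m.getName());
            }
        }
        Collections.sort(names);
        return names;
    }
}
```

A compiler doing the same filtering at resolution time would make "compile against version X of any library" a natural construct rather than a build-system trick.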

Better compiler support would be nice, but not everything can be done in a compiler. The system has to be more complex. It needs to be aware of its own history. For example, it should yield an error when you try to remove a method or class from an API and it has already been published in a previous version. For that it’s necessary to have a snapshot of all important previous versions and let the compiler or other part of the system check that no violation of binary backward compatibility has occurred. Again, this needs to work in orchestration with proper version numbering. There needs to be a policy that enables you to express that the new version of a library is completely incompatible with the previous one, in which case the binary compatibility check would be suppressed completely. As in many cases, there have been attempts to provide tools for such kinds of functionality. However, they are not general enough, and they need a lot of manual configuration and intervention to set them up properly. The language or system of the future should make this instantly ready whenever you start to develop a new library.
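In its crudest form, the history-aware check described above boils down to comparing signature snapshots between releases. The following toy sketch is not one of the existing tools alluded to in the text; the classes LibV1, LibV2, and LibBad are invented to demonstrate the principle:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

class ApiSnapshot {
    /** Records the public method signatures of a class as plain strings. */
    static Set<String> publicSignatures(Class<?> c) {
        Set<String> sigs = new TreeSet<>();
        for (Method m : c.getDeclaredMethods()) {
            if (Modifier.isPublic(m.getModifiers())) {
                sigs.add(m.getReturnType().getName() + " " + m.getName()
                        + Arrays.toString(m.getParameterTypes()));
            }
        }
        return sigs;
    }

    /** True if `newer` still offers every signature recorded in `snapshot`. */
    static boolean isCompatible(Set<String> snapshot, Class<?> newer) {
        return publicSignatures(newer).containsAll(snapshot);
    }
}

// Invented library versions for illustration
class LibV1  { public int count() { return 0; } }
class LibV2  { public int count() { return 0; } public int size() { return 0; } }
class LibBad { public int size() { return 0; } }   // count() removed: incompatible
```

A real system would check far more than method presence (types, throws clauses, class hierarchies, semantics) and would key the whole check off the library's version-numbering policy, but the snapshot-and-compare shape stays the same.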

Yet another thing to consider is whether access modifiers such as public, protected, and final have become obsolete. Given all the discussion in the section “Delegation and Composition” in Chapter 10, I consider this likely. For the purposes of designing an API, I would much rather see a way to specify whether a class or interface is supposed to be subclassed, and potentially restrict who can do so. Then for each method you could specify either that it’s supposed to be callable or that it’s a slot where developers can, or have to, inject their own code. No method should have a dual meaning. Where there is a dual meaning, describing the less likely meaning should be more complicated than describing the more likely case. For example, expressing a method that is only meant to be called (today, public final) should not be more complicated than plain public, which has a dual meaning. I am not advocating the removal of access modifiers. However, I want them to be more aligned with the tasks people perform when designing an API. If the access modifiers were primarily designed to express people’s intentions with an API element, then the infamous and dangerous “reuse by accident” coding style, so common in current mainstream object-oriented languages, would be prevented. I don’t want to prescribe how the access modifiers should look, but I know that the current situation requires so much attention when designing an API that we desperately need something more clueless: something people will use without much thinking; something that makes it hard to misinterpret the author’s intentions.
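Until a language offers such intent-expressing modifiers, one can approximate them with annotations. The @Callable and @Slot names below are invented for this sketch; the real enforcement still falls back on the old final and protected keywords:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical intent annotations for API methods
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Callable { }   // clients invoke this; nobody overrides it

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Slot { }       // subclasses inject behavior here; clients never call it

abstract class View {
    /** A callable method: clients invoke it, and final forbids overriding. */
    @Callable
    public final void repaint() {
        paintComponent();
    }

    /** A slot: subclasses must fill it in; protected hides it from clients. */
    @Slot
    protected abstract void paintComponent();
}

// A subclass fills the slot without ever touching the callable method
class Circle extends View {
    boolean painted;

    @Override
    protected void paintComponent() {
        painted = true;
    }
}
```

Each method carries exactly one meaning, so there is no room for the "reuse by accident" that plain public invites.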

Although I was not polite when talking about compilers in this section, I don’t mean it as a rant. Moreover, I’ll be glad to be found wrong. If there already is a system and language suitable for Agile API Design, I’ll be glad for that. If not, I’d like to ask the language designers to think about their solutions from this new, agile angle. In the meantime, I am looking at what can be done from the outside, if you own the runtime and build system, like we do in the case of the NetBeans Runtime Container.

The Role of Education

Cluelessness is all around us. Are we ready for that? Do we teach people about it? Do we tell them how to build gigantic applications from tons of libraries picked up from all over the world? I am afraid that the answer to all these questions is “No,” and I’d like this to change in the near future.

From time to time I visit various universities and present to their students. I always come away with the impression that schools would rather teach basic coding skills than advocate and explain code reuse. Unsurprisingly, the universities seem to prefer the rationalistic approach, offering a learning journey enlightened by beauty and elegance. Of course, it’s perfect if programmers know how to write quicksort or understand a bunch of graph-related algorithms. However, that is not everything they should know.

TEACHING SKIING

By chance, I once attended a ski instructor course. I didn’t have time to take the exams, so I am not certified to teach new skiers and I have to earn my living as a software architect, but I do know the methodology well enough to build this thought on it.

A ski instructor always starts by teaching the basics, to ensure people are able to stay on their skis and at least generally understand the technique for making a turn. After a few practice hours, the instructor usually has to divide the students into two categories: those taught a style optimized for “survival” and those taught a style for “racing.” With both styles you can greatly enjoy skiing and get down any hill. However, only in the latter group do you feel the centrifugal force that is one of the biggest reasons people love riding motorbikes, skiing, and snowboarding.

I believe a similar teaching style should be used for programmers as well. We need everyone to understand the basics, which in my opinion should include the principles of selective cluelessness. Only that way can we guarantee that regardless of how good programmers are, the systems that they produce are reliable. After a while there should be a fork in the road. One path should lead to the mastering of practical skills such as reusing foreign components and orientation in legacy code. The other should be a more “academic” path oriented toward discovering new ways of applying cluelessness, producing more sharable libraries, and so on.

I’d like to end this skiing parable with a slightly unrelated, but interesting observation: good tools help. When I was young, it was not easy to practice the “racing skiing style,” as the skis were not optimized for turning. When I tried snowboarding for the first time in 1996, I almost immediately switched and gave up on skis for years. However, ski makers caught on with the trend, and these days, with carving skis, turns are easy. The tool improved, and as a result the ratio of the “racing skiers” improved as well. These days, the number of skiers exploiting the centrifugal force is much higher than ten years ago. Good tools help to make higher standards more easily accessible to the masses, and that is the reason why we need good tools for API design as well.

We need to teach both camps. We need people to do the “science”: to discover new principles and algorithms, and to find methodologies, rules, and the important points of clueless development. However, we also desperately need people who will be able to do a good and reliable job: those who will live the life of selective cluelessness. All of this needs to be taught, though. Otherwise, as soon as students leave the university, they’ll find that the software engineer’s life is different from what they expected. Not everyone is able to finish school and then create and work for ten years on their own framework. Most students start a job and are handed the task of maintaining code written by someone else. This is something for which the university doesn’t prepare them at all.

BEING AFRAID OF FOREIGN CODE

I’ll go so far as to claim that these days students are afraid of code they didn’t write. Last year I taught a course about the NetBeans Platform at the Johannes Kepler University in Linz. Part of that course was to finish a small project. I gave the students three options: build a new module to add some functionality on top of the NetBeans Platform, find an existing module that is missing some kind of functionality and modify or patch it to do something new, or fix three bugs. In my opinion, the simplest of these tasks is the last one. NetBeans has a few thousand open bugs, and many of them are easy to fix, just not important enough to justify the developers’ time. Often the fix might be a single line. The second simplest task is to enhance an existing module. The already existing code can serve as a sample; it’s just necessary to plug into it, and you don’t need much knowledge, as in most cases you’re led by example. It’s much easier than writing something from scratch. However, guess which tasks the students took? Nobody fixed any bugs, one person contributed a patch to an existing module, and the rest wrote their own code. Simply put, students are afraid of foreign code. That is not good news for them, because as soon as they leave the university, they’ll spend most of their working time digging out bugs in legacy code inherited from someone else.

The other thing to nitpick regarding the current state of computer science education is the way it measures the quality of student code. When I attended university, we simply wrote a program, showed it to the teacher, and either got approval or not. However, at that time access to the Internet was rare and the open source movement was not as strong as it is today. That is why I would expect some progress since my university days, especially given the power of the Internet and the number of existing open source projects seeking contributions. However, there doesn’t seem to be any. Students still create their own projects from scratch, show them to the professor, and that’s it. The project is then forgotten. It would be much more valuable to show students how to work with existing code. For example, they could be evaluated on the integration of their solution into an existing open source project. The best grade would go to someone who manages to get the code accepted into the project’s code base, as that requires not just coding skills, but communication and the ability to work with the rest of the community. Average grades would go to those who make the project work, but whose contribution is refused for integration as not being good enough. This could even benefit teachers, as most of the evaluation would be done by the members of the community. Still, I have not found any signs of this approach being used.

The role of education is important for developing the new engineers who will take over our projects. We need them to be ready for the task: able to maintain existing code, able to operate in the style of selective cluelessness, and able to assemble their solutions from massive building blocks. Everyone should have the skill to consciously work in cluelessness mode. Everyone should be able to evaluate whether massive building blocks such as libraries or frameworks are ready for reuse or not. They should also master the “selective” part of cluelessness; for example, they should understand that nobody works on a project forever and that the knowledge of a project is captured in its automated verification tools and tests. Although taught to be clueless, these engineers should be able to make their knowledge “buildable” and to go deep when necessary. For example, they should not be afraid to dive into the Linux kernel, debug the NetBeans Platform sources, and so on.

Share!

The era of Agile API Design has just started. This book is only a beginning. Knowledge related to proper API design will evolve, and I hope that I’ve given it a good initial boost by writing and publishing this book. However, it’s necessary for others to build on this base and share their findings. Just as ten generations of physicists enriched, improved, and extended the work done by Newton, many more people will need to come and share their work to make Agile API Design the design choice for the future. Because sharing is important, and because today cooperation can happen far more easily than in the times of Isaac and company, I’ve registered a domain that can be used for discussions, corrections, and add-ons to the groundwork laid by this book. Please visit http://agile.apidesign.org and join us with your comments and thoughts. I’ve registered the domain for the next 3 years, but if this book receives any attention, I’m ready to renew it for the next 300. Enjoy cluelessness and API design.
