Author: Greg Gauthier


Advocacy, Observation, and The Future

Scientist, Storyteller, or Spokesman?

Chapter four of Bach, Kaner, and Pettichord’s Lessons Learned In Software Testing (“Bug Advocacy”) was quite a difficult read for me. Not because it’s any more obtuse or intellectually dense than the first three chapters, but because it’s so conflicted.

The question I ask in my subtitle is an interesting one, to me. In some ways, a tester is actually all three. I’ve talked about good examples of this, before. And, ultimately, there’s no single right answer to this question. Some great storytellers are also respectable scientists. And some excellent scientists are amazing storytellers.

But the more specific question here is how we should think of ourselves when we are creating and stewarding our bug reports. Bach, Kaner, and Pettichord offer us a very mixed answer to that question.

With the lessons provided in this chapter, the authors paint two significantly different – and deeply contradictory – portraits of the tester. On the one hand, he is a disciplined, objective, and thorough reporter, who steels himself against the urge to exaggerate, providing only the cold hard facts necessary in order for the appropriate authorities to make rational decisions about how to respond to his reports:

You are an information service… Your responsibility is to report bugs accurately, and in a way that allows the reader to understand the full impact of the problem… If you make the bugs you report seem more serious than they really are, you’ll lose influence… Your job is to report problems, not identify root causes… keep your tone of voice neutral… Don’t insist that every bug be fixed; pick your battles…

On the other hand, he is an advocate, emotionally invested in (and politically motivated by) the outcome of all his bugs. He is willing to exploit office power relationships in order to end-run his colleagues in an effort to achieve a preferred objective with regard to those reported bugs:

Any bug report that you write is an advocacy document that calls for the repair of the bug… your bug report is a sales tool; it’s designed to convince people… you can take a relatively minor-looking bug and discover more severe consequences by doing follow-up testing… To get a bug fixed, you have to convince the Change Control Board to approve the fix… if you think it might be difficult to convince the programmers to fix a bug, but you want it fixed, consider who else in the company will benefit if this bug is fixed…

Which portrait is accurate? Which is preferable? We don’t really get a good sense of this, from the lessons provided in this chapter. Actually, I’m not sure a universal principle for this chapter can be extracted from these lessons. The reality is that sometimes you have to be a reporter and sometimes you have to be an advocate, and knowing which to be at any given time requires the wisdom of experience. I just wish Cem Kaner and James Bach had offered a bit more of their own, in this regard.

Stick To The Truth

In my own experience, I have found that taking the objective approach is far more productive than trying to be an advocate. In keeping with the view I’ve held in my reviews of previous chapters, I think testers need to see themselves more as research scientists than as science journalists.

Our job is to design and execute experiments that provide us with demonstrable knowledge about the test subject, and then to report that knowledge as thoroughly and accurately as possible. When we vary from this, inevitably, we drift into the realm of confirmation bias, self-fulfilling prophecy, and tunnel vision. No longer are we simply reporting the observed effects of caffeine on the biochemistry of the body, we are demanding that somebody do something right now about the dangers of coffee drinking.

The minute you lose your objectivity as a tester, you become someone with an agenda. Someone who needs to be “handled”, resisted, avoided, or at best, suspected of partiality. Bach et al. were careful to point this out in lessons 65, 66, and 86, warning us not to use bug statistics as performance measurement tools, and to avoid emotionally charged language in reports. But they didn’t seem to notice the same problem when suggesting, in lesson 64, that we use stakeholder authority to pressure programmers into doing work they would not otherwise do. This approach, in my view, is just as toxic as the practice mentioned in lessons 72, 98, and 99 of letting fallow or ugly bugs disappear into the system.

Staying dispassionate gives you an authority you would not otherwise have. Even our authors recognized this when, in lesson 84, they stated:

Your credibility is fundamental to your influence. If you make the bugs you report seem more serious than they really are, you’ll lose influence.

The Tester’s New Clothes

In my view, the most valuable lessons of Chapter 4 are lessons that the authors could not have penned explicitly at the time this book was written but, to their credit, did hint at throughout the chapter. They are lessons that the authors teach implicitly (perhaps by accident) to those of us who enjoy the vantage point of a retrospective future.

Software development as an organizational activity, and testing as a discipline within that activity, have undergone substantial upheaval since the authors penned this book in January of 2002. The processes and tools used to bring new technologies and applications to market today are almost unrecognizable compared to the processes and tools used in the very early days of the internet, most of which had been borrowed from the legacy years of the ’80s and ’90s.

In 2002, “Agile Developers” were a fringe splinter sect of renegade XP programmers, who themselves were rare and defiant unicorns in a world full of hierarchy, bureaucratic structure, and physical paperwork.

It is within this context that we get the first implicit lesson, in the form of lessons 91, 92, and 95 (destined to become an industry standard 10 years later):

Meet the programmers who will read your reports… As soon as they find a bug, some testers walk over to the programmer who covers that area and describe it or show it off… the tester can learn from the programmer, and the programmer has access to the system… let him talk with you when he’s ready… if a bug fix fails repeatedly… take it directly to the programmer.

In the modern world of small, nimble, and highly focused development teams (ones dominated at least nominally by informal verbal commitments to Agile principles), testers sit not only on the same project team as, but usually in the same space with, developers, product managers, and designers. Short feedback loops between commits and test reports are not only encouraged, they are essential to the success of the project.

Even where “Agile” is not a formal commitment, this arrangement seems to hold. I have worked in organizations in the US, UK, and Europe where the first principle of “individuals and interactions over processes and tools” has been accepted implicitly (almost accidentally) as the most effective approach to software development.

Our Challenge

The second implicit lesson is one we see by comparing the world described in the book to the one we exist in now. Organizational structures like “Change Control Boards” appear comically whimsical in a world where “move fast and break things” is the motto of the second largest web service in the world.

Yet Bach, Kaner, and Pettichord seem to have sensed that this transformation was imminent, and vaguely recognized its implications, in lessons like 69:

Test groups can make themselves more capable of evaluating design errors by hiring people into the group who have diverse backgrounds. A tester with domain expertise… can focus tests and explanations… If one tester knows database design, another knows network security, another knows user interfaces, and so forth, the group as a whole is positioned to make knowledgeable and useful evaluations…

In modern software development, it is no longer enough for testers simply to be good critical thinkers and good skeptics. They must also be technically competent. Technologies and applications have grown exponentially in complexity and sophistication since the days of the 16/32-bit desktop computer. The pace of change has quickened, and market demands have accelerated right along with it.

In this new world, testers must be mindful of the agile admonition to value “responding to change over following a plan” and “working software over comprehensive documentation”. What this means, in practice, is that there can no longer be any distinction between a “tester” and a “technical tester”. Every tester must be a “domain expert” in his own right. He must be just as capable of building a server from scratch as any reasonably competent tech op. He must be just as capable of debugging a faulty Java class as any reasonably competent programmer, and he must be capable of working with the tools those skills require. Things like the command shell, version control systems, and developer tools like debuggers and editors should be common knowledge to the tester.

Without this basic grounding of technical skills, the tester’s critical thinking skills are really no more useful to him than a high-performance auto engine without a transmission. All sound and fury, signifying nothing.

I’ll have much, much more to say about this in the coming months, but for now, it will suffice to say that if you are a tester and you are not training yourself as a technician (as well as a critical thinker), you’re hobbling your career.

Fear As A Heuristic For Testing

Second only to critical thinking and empiricism, self-awareness is the single most useful and important tool in the toolbox of a tester.

After absorbing experiential input from any given event, the emotional data that is subsequently returned from within is a crucial piece of evidence necessary for interpreting and understanding the event, and for deciding how you want to respond to that event.

This may seem, at first, like nothing more than a complicated way of saying “action = reaction”. But that’s not quite what I mean. This simplistic restatement describes exactly how most people regard their emotional lives: on the periphery of consciousness, autonomously driving preconditioned behaviors, accumulated in childhood and early adulthood.

What I am suggesting, instead, is that we can retrain ourselves, to consciously observe and choose an appropriate response, rather than merely reacting. While I’m not a subscriber to either Stoicism or Buddhism, both philosophies offer a notion of “mindfulness” that closely describes what I want to encourage.

As technicians, and testers, if we can get to a point where we are able to “inspect” ourselves, as we are exploring some new build, or product, or technology, additional insights become possible.

There is one emotion that I find especially important to this task: fear. When this emotion occurs in the wild, it instinctively drives avoidance behaviors: the behaviors designed to keep you from being mauled by a tiger, or bitten by a snake. But in the context of testing a software project, these behaviors are maladaptive.

Among the various “threats” we sense in our day-to-day lives as testers and technicians are unfamiliar technologies and tools. The anxiety that comes to the surface around these things can be intense, particularly in situations where we are surrounded by other very smart folks, whose judgments of us may affect our careers.

Turn In The Direction Of The Skid

For a large part of my early career I was plagued with a problem that I mistakenly thought only I had. I was terrified that eventually, inevitably, someone was going to “find me out”; I was going to be discovered as an incompetent who’d somehow managed to sneak past every interviewer in the company I happened to be working for.

In response to this fear, I withdrew from any opportunity that might put me in a situation that stretched the limits of what I knew, or what I was skilled at — in other words, anything that might expose me as a fraud. For a long time, this trapped me in a box of narrow possibility and narrow value. Withdrawal was perversely creating exactly the situation I incorrectly feared I was in to begin with.

Reversing this process was difficult, and the details of that journey are the topic for a different blog post at a different time. But for the purposes of this discussion, it’s important to note that the work began with a key realization: I wasn’t wrong about not knowing things. I was wrong to think that I was the only one who didn’t. And even more wrong to think that I was the only one who was afraid to admit it (in fact, it turns out this is a common psychological phenomenon).

What’s more, when I started paying attention to the folks who seemed to excel where I could not, I noticed they were not the ones who already knew what they were doing. They were the ones asking questions, reading books, and experimenting. In other words, they were the ones who seemed unafraid to admit they didn’t know.

In fact, it turns out, they still feared that admission as much as I did. But they did something different with that fear. Rather than using it to avoid judgment, they were using it to point them in the direction of new knowledge. They were willing to risk calling the bluff of the fear, for the potential benefit of new opportunities.

Into The Lion’s Den

This is a lesson that is very hard won, but also very much needed in the testing community. Not only for career potential, but also for the reward it offers in day-to-day testing itself. The more risks you’re willing to take, the more chance you have of discovering new bugs and potential usability problems in the product you’re testing.

I have experienced this many times over the last ten years or so. Every job I’ve had, I’ve been faced with the terrifying realization that I had no idea what I was looking at, or how I was going to test it. But instead of withdrawing, as I would have in my old life, I dove in head first. And in doing so, I was able to dramatically increase the value I had to offer to my employers, as well as dramatically improving the quality of the products I was helping to test:

  • Finding flaws in python SQLAlchemy calls to an Oracle database, that threatened the integrity of data that belonged to many large institutional clients.

  • Working side-by-side with .Net developers to provide C# unit test coverage on several key pieces of a highly profitable enterprise elearning product.

  • Identifying critical problems with key functionality like LDAP authentication, compatibility issues in a supported database platform, and feature authorization leaks, in a product meant to be used by developers in an enterprise environment.

I point these examples out especially, not because I want to demonstrate how much I know, but rather, because these are dramatic examples of how much I did not know. They represent moments of acute fear for me. Moments that required me to notice that fear of the unknown, of “being found out”, of “appearing incompetent”, and to decide how I wanted to respond to it.

Doing The Work

What I decided to do was to emulate the behavior of those I’d noticed excelling in the past: admit my knowledge gap; ask for help; and then research, read, and experiment on my own. In doing so, I’ve acquired skills I didn’t have before. And I was able to make that choice because I’ve been slowly practicing the skill of “inspecting” myself while I inspect the software I’m testing.

Admittedly, it’s not easy to acquire the mental habits necessary to face that fear and make that decision consciously. There are many places on the internet to find ad hoc advice on how to turn fear into a personal asset. But on this front, I’d suggest considering some professional advice from a good cognitive therapist. As with tech questions, it can be equally useful to admit that there’s a lot about ourselves we don’t know.

From my own testing experience, I can offer three rough ideas that you might find helpful for kick-starting the process:

First, when you’re surveying a new application, all of its components, and its infrastructure, note down when you feel least comfortable. What were you looking at when you first started feeling uncomfortable? Are there any common areas where you feel least comfortable? Are there any areas you find yourself impulsively ignoring or avoiding? All of these areas might be candidates for further investigation.

Second, when you are in morning standup or in feature design or sprint planning sessions, do you ever find yourself completely lost or afraid to admit to your devs that you don’t know what they’re talking about? This might be a good opportunity to take one of them aside afterward, and ask for some time to go over the technical details, or to ask for a good resource that you can pursue.

Third, and most important: Have you noticed things landing in user support tickets or bug reports that deal with aspects of the product that you didn’t realize you were unfamiliar with, or have been unable to test properly because of something you’ve been avoiding about it? This might be something you’ll put at the top of your “must learn” list.


It seems to me that emotional evidence is an untapped area of exploration in testing. This post focused entirely on fear, but there are probably other emotions that could be identified as useful signals of potential improvements in the way we work, or of potential sources of new knowledge or quality insights. This post obviously isn’t a comprehensive look at fear; there’s a lot of room for discussion on this topic alone. But hopefully, this is enough to get people thinking about the subject.

Testers Are Experiment Designers

Chapter three of Lessons Learned In Software Testing begins by pulling us down from the cloud of philosophical abstraction we’ve been reveling in, to talk about actual testing practices. The authors take us from the ideal to the concrete by first providing a classification system of their own making, and walking us through nearly every known test technique, explaining how the technique fits into the system.


While this chapter may at first appear to be merely a mundane catalog of testing activities, what Bach et al. are really offering us (in keeping with the metaphor of science) is an intellectual toolbox from which to begin our own experimental design in software testing. Through the descriptions offered of each testing technique, the authors are conditioning us to think very carefully and systematically about what we actually want to do to test a software product, and why we would want to do it.

One approach to systematizing our testing, taken by the authors in chapter three, is the “Five Fold Testing System”. Given this framework as a basis, what we are really being tasked with is designing testing experiments that answer questions about the product under test, or attempt to falsify assertions about that product.

The “Five Fold System” is not a direct analog to the techniques and categories employed in the design of actual scientific experiments. Yet, if we look at the system’s five aspects specifically, we can see that they share many similarities with a scientific experiment:

  • Testers [People]: Who does the testing. For example, user testing is focused on testing by members of your target market, people who would normally use the product.

This might be understood as test subject selection. This is where a social scientist would be considering problems like demographics, sample size, environmental conditions, and so forth.

  • Coverage: What gets tested. For example, in function testing, you test every function.

In the design phase of an experiment, a scientist might consider this the “scope” of his experiment. If I’m working on the chemical effects of Cannabis on the body, do I limit my experiment to only neurochemical effects, or do I include other physiological factors as well?

  • Potential problems: Why you’re testing (what risk you’re testing for). For example, testing for extreme value errors.

This third category might be thought of as the hypotheses themselves. What questions are being asked? What assertions are we trying to prove, or disprove?

  • Activities: How you test. For example: exploratory testing.

This category is about the “method” of the experiment. To borrow from the social sciences again, this might be about whether I’m going to rely entirely on self-reporting surveys, or entirely on neurological data, or a mix of both, or something else entirely.

  • Evaluation: How to tell whether the test passed or failed. For example, comparison to a known good result.

If you’ve read any published scientific papers, you’ll recognize this as the analysis and conclusion of a scientific experiment. What results did we get? What can we reasonably say about those results? What are the implications? What needs further study? One important feature of evaluation, in both science and testing, is the degree of reproducibility of your results. I’ll go into this topic more in the future.
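The comparison-to-a-known-good-result style of evaluation can be sketched in a few lines of Python. The function under test and the recorded values here are hypothetical stand-ins, purely for illustration:

```python
# A trivial "oracle" check: compare the output of the system under test
# against a known good (expected) result recorded earlier.

def system_under_test(price, tax_rate):
    # Hypothetical stand-in for the production code being tested.
    return round(price * (1 + tax_rate), 2)

def evaluate(actual, expected):
    """Return 'PASS' if actual matches the known good result, else 'FAIL'."""
    return "PASS" if actual == expected else "FAIL"

# Known good results, e.g. captured from a trusted earlier version.
known_good = {(100.00, 0.08): 108.00, (19.99, 0.05): 20.99}

results = [evaluate(system_under_test(price, rate), expected)
           for (price, rate), expected in known_good.items()]
```

The interesting part is not the arithmetic but the separation of concerns: the oracle (`known_good`) and the judgment (`evaluate`) are independent of the thing being tested, which is what makes the result reproducible by anyone holding the same known-good data.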

Ultimately, the idea of this framework is to provide testers with a tool for “making better choices” about the testing techniques applied to various software testing problems:

Despite the ambiguities (and, to some degree, because of them), we find this classification system useful as an idea generator. By keeping all five dimensions in mind as you test, you might make better choices of combinations.

Better choices make for better test plans, better test plans make for better testing, and better testing makes for better software.

Bach et al. pack this chapter thick with specific details and examples, and focus intently on all the ways one could scrutinize a piece of software or its features. And, although it is admittedly not a comprehensive “how-to” guide, it does provide a solid path of further study for any motivated tester who reads the book.

I’ve decided to end my review of chapter three here, rather than to discuss each technique in detail. There is so much material to cover, it would fill at least one thick volume on the topic.

Yet, they deserve discussion. So, what I’m going to do is to turn chapter three into a to-do list for this blog. I’ll be discussing each technique outside the context of this book review, providing examples and context from my own testing experiences, and those of my colleagues.

Up next week: I’ll cover chapter four, in which the authors talk about another very concrete and very immediate topic of testing: bugs.

Teaching Behavior Driven Development Through Gherkin

I have championed the concept of Behavior Driven Development for several years, spanning a few different employers, during my testing career. In that time, I’ve made a few discoveries about BDD, and about the broader software development community’s relationship to it, that have really been bothering me lately. So, I thought I’d take a moment while I’m on hiatus from Lessons Learned to share my findings with you all.

The first discovery, and perhaps the most important, is that people don’t really understand what BDD is, or what it’s for. Most folks equate BDD with Cucumber itself, mistaking a tool for a methodology. Most folks also equate Cucumber with acceptance testing, mistaking collaborative design for test automation. And, as a result of all of this, BDD and its tools have largely been either ignored or badly misused.

In the startup world, where I work, collaboration and negotiation are the most important aspects of any software development project. Small development teams cannot afford to lose touch with the needs of the business itself, and the people driving the project. In my view, this makes BDD, and its associated toolsets, an essential component for success, since it enables a clear and definite understanding between technical and non-technical team members about what promises the software is meant to keep.

But because Cucumber, Lettuce, and Behave are technical tools, and because they look a lot like test automation tools, and because nobody is teaching the methodology, many product managers and owners that I’ve known won’t even talk about BDD, let alone consider using something like Gherkin as a documentation tool.

I’ve had better luck approaching development team leads, particularly in environments where product managers are not present (surprising, perhaps, but a commonplace reality in the startup world). Given the developer’s mental orientation toward technical implementation details, it’s a risky venture. But with a little patience and good coaching, it’s not too difficult to help the tech leads on a team see the benefit of the approach, especially when they are being pressed to satisfy many of the demands put on product managers: namely, negotiating and documenting feature design and requirement specification.

The first step of hooking them with the promise of test automation will get you in the door, but you’ll have to work quickly and persistently to re-orient the minds of the development team and the tech lead. Your task is to get them to put themselves in the skin of a user, rather than a developer (because the first thing he’ll want to know is how to specify SQL connection strings in Gherkin).

To do this, I’ve been tutoring my present tech lead in Behavior Driven Development by insisting on two things:

First, that we absolutely must write out our scenarios during the planning and refinement sessions for any given feature or sprint. That’s been a challenge, to be sure. But it has yielded a number of positive results, both in getting the devs to think more laterally and strategically about the features they’re building, and in shrinking the amount of churn between the dev team and the CTO (who sets the product direction, presently).

And second, I have imposed some conceptual and implementation constraints on the way that we use Gherkin to write our specifications. These “rules of thumb” are not wholly canonical with the vision of Gherkin, but I find them extremely powerful for getting devs to really understand what they’re being asked to do, and how they’re being asked to think.

  1. Firstly, and most importantly: Gherkin frameworks like Cucumber and Behave are not meant to function as substitutes for proper traditional unit, functional, or integration testing. If you’re using Gherkin to test input field validation, for example, then you’re testing the wrong stuff. If you want to test that input fields are properly empty when a dialog is displayed, you should be writing JavaScript (or PHP, or Python, or Ruby) unit tests. If you want to test that a dialog or light-box is displaying and dissolving correctly, you should be writing functional tests. If you want to test that the data in an input field is being stored properly in the database, you should be writing integration tests. But if you want to test that you’re keeping the promises you make to customers, then you should be writing Gherkin. Or rather, you (the tester), your Product Manager, and your Tech Lead (i.e., the “Three Amigos”) should be working together to write it.

  2. Gherkin specs are not quite product specifications. Rather, they are a description of a user’s desire to accomplish a goal (what Product Managers might call a “user journey”). Our challenge, then, is to write down all the promises we make to end users about what they will be able to accomplish with this software. Which is a different task than writing down in detail, all the things that the software is capable of.
  3. Gherkin should not be used to “explain how” the user accomplishes his goal. Unless the test is about a specific step along a path, the specific steps are not needed. Gherkin is not user documentation. And Gherkin is not a set of reproduction steps. It is a document describing a contractual agreement with your business team. In short, Gherkin specs describe the “what” of the scenarios, not the “how”. What I tell developers specifically, is to beware of trying to embed the step definitions into the scenario assertions.
  4. Gherkin specs should not insist on there being one way to do something. In other words, Gherkin should not simply be telling the programmer what to program. I often joke with developers that they should not be using Gherkin as a “macro language for python”. They should be free to code up any solution they want in the application, and in the step definitions, that gets the Scenario to its stated goal. Gherkin should not care, as long as the behavioral end-goal is satisfied.
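To make these rules of thumb concrete, here is a minimal sketch of a declarative scenario written in their spirit. The feature and its wording are hypothetical, invented for illustration rather than taken from the book or any real project:

```gherkin
Feature: Password reset
  # A promise made to the user: the "what", not the "how".

  Scenario: A user who has forgotten their password can regain access
    Given a registered user who has forgotten their password
    When they request a password reset
    Then they receive a way to choose a new password
    And they can sign in with that new password

  # An anti-pattern version of this same promise would script the UI:
  # "When I click the 'Forgot password' link, type my email into the
  # #email field, and press Submit..." That is step-definition detail
  # leaking into the scenario: it tells the programmer what to program,
  # and it breaks the moment the implementation changes.
```

Notice that the scenario would remain true whether the reset arrives by email link, SMS code, or something else entirely; that freedom is exactly what rule 4 is asking for.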

Ultimately, my goal is to get the team to change the way it relates to a software development project as a whole, and to change the way they think about software design. And I’ve seen some positive moves in that direction with my current team. We’ve even done a few feature planning sessions with complete specs already, that have yielded insights into the product that led to important improvements and changes.

I have an ulterior motive, as a tester. I want the product quality to be as good as I can get it. And if I can get the quality extremely good before a single line of code is ever even committed to the repository, then that makes my job a whole lot easier on the other end!

But, as with all projects — especially in an Agile environment — change and improvement is a gradual, incremental phenomenon. Time will tell whether my broader goal is achievable. But for now, I’m just satisfied to see the team writing clean, coherent, manageable Gherkin specifications.

On The Organizational Value Of Humility and Empathy

Every now and again, I try to do at least one thing that reminds me in a visceral way, just how much stuff I don’t know. I find no tool is more effective at this, than a coding challenge.

These days, most people bristle at the idea of a coding test, particularly as part of an interview process. There are some good reasons for this, but mostly I think people avoid these because it’s painful. It’s nausea-inducing. It’s often embarrassing, and sometimes leaves you sitting neck deep in a tattered mess of broken code and mixed emotions afterward.

But I’ve come to like these tests for precisely this reason. Not “like”, in the sense of liking Oreos or liking Star Wars. But “like”, in the sense of liking the occasional cold shower, or good hard slap. They sober you up. Coding tests are a giant wrecking ball crashing through the edifice of arrogance and complacency. They remind me that, no matter how smart I might think I am, there is still more to learn.

It’s important to be reminded regularly of what one doesn’t know. In addition to the reorienting value of a reality check, it clearly points the way to new experiences and new challenges. Knowing what you don’t know makes it possible to grow. And that makes all the difference for me.

The best coding challenges will teach you about what’s really important to software engineers, and will give you an opportunity to spend a little time in their skin, as it were. You learn what motivates them, where the corner-cutting impulses lurk, and what the worries about efficiency, architecture, and “clean code” really mean. It will give you a clear idea of what it means to write “buggy” code, the ways in which those bugs are most likely to occur, and how programmers try to deal with them. You’ll also learn about just how hard it is to build something both useful and beautiful.

The implication I am leading up to, is that coding challenges provide an even more important lesson than humility. They offer an opportunity to grow your empathy for your fellow teammates. This lesson alone is worth far more, in my view, than even the coding skills themselves (although, those are important, and a subject for a different blog post). Empathy opens doors for communication, negotiation, and collaboration that were not available before. It changes the nature of your relationship to your teammates, and in doing so, ultimately improves the quality of the product by improving the quality of the lives of everyone around you.

This newfound empathy reminds me that I don’t always know what is going on inside the heads of my teammates. And the humility of that realization inspires me to temper my relationships with curiosity and patience. And this begins a feedback loop of continuous improvement within my working relationships.

One important insight this feedback loop offers is the knowledge of whether or not your teammates are willing to grow with you. This is vitally important for your own development. But more to the point here, it will tell you where the boundaries of quality are in the team, and in the product. These boundaries are the frontier you must press toward if you are to have any lasting value in the organization.

The most important relationship you’ll ever have within any organization is your relationship with yourself. By working to stretch your own inner boundaries, and being willing to bear the emotional challenges that come with it, you’ll be leading by example and providing your team with a value that extends far beyond simple bug reports.