
Hi,

I recently started writing my first quarterly review. As part of the process, I forwarded it to my manager for review and comment. I also asked him for a copy of a well written review that he might share. Looking through his files, he came up with an example of a "good" review.

I was surprised that the review was nine pages long. In my experience, that seems rather lengthy.

Note that the company is trying to streamline its process. The new "draft" form is 13 pages.

Note also that the company has a fairly well-established pattern of not delivering reviews on time. I think the length may contribute to this.

To be fair, part of this form is a self-assessment written by the employee. This part represents about 25% of the content.

So, my basic question is this -- how long are your written reviews?

Thanks,
Steve

By the way, here are the statistics for the sample my manager provided.

Pages -- 9
Paragraphs -- 143
Lines -- 742
Words -- 4974
Characters -- 23930
Sentences -- 207
Sentences per Paragraph -- 3.6
Words per Sentence -- 20.5
Characters per Word -- 4.6
Passive Sentences -- 7%
Flesch Reading Ease -- 55.9
Flesch-Kincaid Grade Level -- 10.5
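
For the curious, those last two numbers come from the standard Flesch formulas, which only need words per sentence and syllables per word. Here's a minimal sketch of the arithmetic in Python; the syllables-per-word figure is a guess, since Word doesn't report it above:

[code]
def flesch_reading_ease(words_per_sentence, syllables_per_word):
    # Standard Flesch Reading Ease formula: higher scores read more easily
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

def flesch_kincaid_grade(words_per_sentence, syllables_per_word):
    # Standard Flesch-Kincaid Grade Level: approximate US school grade
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

# 20.5 words/sentence comes from the sample above; 1.54 syllables/word is a guess.
print(round(flesch_reading_ease(20.5, 1.54), 1))   # ~55.7, close to the 55.9 reported
print(round(flesch_kincaid_grade(20.5, 1.54), 1))  # ~10.6, close to the 10.5 reported
[/code]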

tcomeau

[quote="sklosky"]
So, my basic question is this -- how long are your written reviews?
[/quote]

Here are some numbers on one of my better efforts. (That is, it was a review I was really happy with, captured my guy's performance well, and specifically identified growth areas.) This is not the longest review I wrote this year, but it is slightly longer than the median.

The total form (including the HR-supplied boilerplate and the goal statements) is 15 pages, 244 paragraphs, 5718 words.

The actual writing, including self-appraisal, is about 5 of those pages, 39 paragraphs, 1742 words. So maybe 1/3rd of the total form is appraisal, and maybe 1/4th is goals. The rest is instructions, rating criteria and weasel words. I mean, important HR notes.

The part I wrote (excluding his self-appraisal) is 3 pages, 22 paragraphs, 1098 words.

The math to get his self-appraisal is left as an exercise.

Just looking through real quick, the average review looks about a page (maybe 300-400 words of my writing) shorter.

tc>

tlhausmann

[quote="sklosky"]
So, my basic question is this -- how long are your written reviews?
[/quote]

About two pages of text authored by me, spread out over a 5-6 page form.

In the casts, Mark and Mike discuss the S.E.E.R. structure (Summary, Example, Elaboration, Result) for writing a paragraph. The style prescribes a tight 4-5 sentence paragraph. Annual reviews written in this manner incorporate BLUF (bottom line up front) and reinforce your main point. The result is that your reviews are clear and concise.

Forum discussion: http://www.manager-tools.com/forums/viewtopic.php?t=1608&start=0&postday...
Casts:
http://www.manager-tools.com/2006/07/preparing-for-your-review-part-1-of-2/
http://www.manager-tools.com/2006/08/preparing-for-your-review-part-2-of-2/

tplummer

Current reviews are about 5 pages including boilerplate. BTW: where the heck did the term "boilerplate" come from? Anyway, the company is moving to an extremely streamlined form that is only 1 page. I think we'll have a total of 1000 characters in the new form. The bad news: it's not much of a review. The good news: it forces continual feedback (else you'll be completely ineffective), and it makes reviews extremely easy to write!

I've always thought the review should be short and to the point, kind of like a PowerPoint presentation. The message comes through in the commentary, so the form contains the facts: here are the 5 things you need to do. The actual review captures the why and how.

James Gutherson

'Boiler plate,' from what I was told, comes from printing presses. Rather than setting standard text character by character, time and time again, the standard text would be cast as one sheet of type that looked like a sheet of plate from a boiler.

tcomeau

[quote="JimGutherson"]standard text would be one sheet of type that looked like a sheet of plate from a boiler.[/quote]

And the "permanent" text would be set on steel plates for durability, rather than soft lead.

tc>

Mark

One page.

Mark

AManagerTool

One page?

How shall we do that when the "boilerplate" review form is 5 pages?

I smell another podcast....

terrih

We have a 4-page form that consists mostly of rating the employee on a scale of 1-10 on various performance areas. Our HR department is on record as saying they don't like it and want to revise it, but they haven't yet.

The company hasn't mandated another round of reviews yet, and I'm not quite sure how to handle that. The last time they mandated reviews, HR told us all to tell our directs that the reviews would NOT be tied to raises. I did tell them, but that didn't stop people hoping, and those hopes have been dashed.

Trying to figure out if I should do my own reviews or what. I know M&M say quarterly. I confess I haven't done one since the last company-mandated one last March.

ctomasi

I was part of the team that wrote our current system four years ago, so I know what goes into it. It's not even close to the one-pager Mark recommends. I'd love it if it were!

Our system has four parts, each filled out by both the employee and the manager:

[list]
Accomplishments
Development areas
Employee rating (38 factors)
Development plan - SMART goals
[/list]

For most employees, there is also peer feedback attached in the form of a couple dozen (1-5 scale) ratings and comments at the end.

All of that comes to a packet of 5-7 pages, depending on how verbose the writers get. The thing that bothers me is that the core message is buried in that mess.

We're rumored to be migrating to a new system next year. I haven't heard anything about it yet.

WillDuke

Not sure where I heard Mark talk about this; it must have been briefly at the conference.

1 page.

Divide the page horizontally with a line.
Divide the top half with a vertical line.

So now you have an upper left quarter, an upper right quarter, and a bottom half.

Upper Left - WWW - What Went Well
Upper Right - TALA - Take a look at, or things to improve
Bottom 1/2 - Goals.

There's a fair amount of interpretation here. Mark threw something out really fast and I extrapolated, so as a disclaimer, this isn't necessarily an MT form, just my interpretation of what it might be if I heard him correctly. :)

Anyway, it's what I use now. Reviews are much easier.

ramiska

I like that format, Will. Nice and clean.

I work for a division of a large multi-national (Fortune #64). The company decided to devise a common review system for all 100,000+ employees. Our review system prints out to probably 7 or 8 pages, though much of it is boilerplate. Everything is computerized, with comment boxes under each category: Business Objectives, Competencies, and Development Objectives. The objectives are SMART.

Ratings are essentially 1-5.

ccleveland

My conference notes almost exactly match what Will posted.

One minor difference: in the bottom half, I have Goals [u]for next year[/u]. It may be a small change, but it narrows the scope of the goals a bit.

Mark also suggested that you use this format even if you have a "corporate-required" 10+ pager. Just staple it to the top.

CC

WillDuke

CC - didn't realize I was unclear on that. I absolutely agree. The bottom half of the page is about the future. Next year's goals.

I would like everyone's opinion on something. My partner thinks we need a numerical rating for each employee's performance. I suggested a 1-10 scale:

1 - Needs immediate termination.
5 - Adequately does the job; neither fails nor exceeds expectations.
10 - Perfect employee, everything you could hope for.

What do you guys think of a small box in the upper right corner to hold the rating?
What would you think of using this rating system each week during O3s?

tplummer

For me, a scale of 1 to 10 is too granular. I think I would go nuts trying to differentiate between a 7 and an 8! Plus, if money is separate from the rating, then you can give two people the same rating but reward one a bit more, effectively making the distinction between a 7 and an 8.

I couldn't give people a formal number each week. My numbers roll into my center, which rolls into my department, which ... So, I can't guarantee someone I rated as a 7 wouldn't be bumped to a 6 in the final rollup for the year. What I've done is a formal mid-year review (I believe Mark and Mike recommend quarterly :oops: ) with approximate ratings. We have a 5-point scale, so I give 1-2, 2-3, 3, 3-4, or 4-5. A 1-2 is a top performer who is pretty much on track for an excellent review, 2-3 is borderline good to average, 3 is solidly average, 3-4 is borderline needs-improvement, and 4-5 is, well, very bad!

ramiska

Our five-point system is actually verbal rather than numerical. A rating is given for each goal, competency, etc., and an average makes up the total score.

That score is the starting basis for pay increases. Increases are never mentioned in the review; they aren't known until months later.

Mark

One page is the standard, but it is rare.

On the other hand, what Chuck listed above is pretty good - top quartile of what I've heard and seen. Except for... THIRTY-EIGHT factors? That is just hilarious. That's just some nerd in HR thinking they can do numerical analysis on it all... they never do, and the rankings end up largely based on what the culture says is acceptable and what constitutes significant negative feedback.

Mark

tcomeau

[quote="mahorstman"]... Except for...THIRTY EIGHT factors? That is just hilarious. That's just some nerd in HR thinking they can do numerical analysis on it all..they never do, and the rankings become largely based on what the culture says is acceptable and what constitutes significant negative feedback.
[/quote]

Wow, 38 is a lot. I guess I'd love to see the list.

We have a mere 18 "Uniform Performance Standards," seven of which apply only to managers. Each is rated 1-10, and the scores count for a quarter of your overall "grade".

The other 75% is based on accomplishments related to your goals, which are supposed to be SMART, but in fact aren't.

The overall grade is used to compute your raise, directly. The Directorate sets a target raise for people who "meet objectives" and computes a raise distribution based on the score dispersion.

"Good" managers figure out how to game the scores to get the raise distribution they want. Clueless managers don't, and then whine when they have to explain why everybody in their branch got 3.2%.

Not only does HR do some statistical analysis of scores across the entire Institute, but our Division (which is now nearly a third of the Institute staff) does a very detailed analysis of the results. Unfortunately, we mostly learn that we have the same problems (and strengths) as last year.

I would love to figure out how to make my guys' goals SMARTer, but I haven't had much luck. On the Hubble side, the basic direction we get from the Mission Office is "we think item x is going to break next; what can we do about it?" Figuring something out and putting together a plan is usually a six- to nine-month effort. For my own project on Webb, my next deliverable is in late July (a day after my birthday, in fact) and the Performance Period ends in June. So I won't actually "accomplish" anything during this review period.

Performance Review Goals in my branch focus on building competencies and improving the process, rather than on specific accomplishments. I do think my life would be easier if I were solving ordinary problems.

tc>

ramiska

[quote]Wow, 38 is a lot. I guess I'd love to see the list. [/quote]

Wait till the end of 2008. I might be able to show it to you then. Oy!

sklosky

All,

Thanks to all for responding to my question. The responses provide great insight into this area.

I delivered the quarterly review to my employee yesterday. The report (prep, writing, delivery, post-action) went well. The result was "good". I'm looking forward to the next reviews. My goal for these reports is "better" and "best". :)

Have a great holiday season.

Regards,
Steve

ccleveland

[quote="tcomeau"]...putting together a plan is usually a six to nine month effort. ...my next deliverable is in late July ... and the Performance Period ends in June. So I won't actually "accomplish" anything during this review period.[/quote]

I'm no rocket scientist, :), yet don't you have more granular deliverables or ways of tracking progress in chunks smaller than 6 months? I thought it [u]was[/u] rocket scientists (NASA) that helped develop "Earned Value" project calculations.

CC

tcomeau

[quote="ccleveland"]
I'm no rocket scientist, :), yet don't you have more granular deliverables or ways of tracking progress in chunks smaller than 6 months? I thought it [u]was[/u] rocket scientists (NASA) that helped develop "Earned Value" project calculations.
[/quote]

I'm not a rocket scientist either, but I know several. A standing joke around here is "It's not rocket science, which is too bad, because we're good at that."

EVM was developed by the Department of Defense, though it was adopted by NASA. EVM requires that you know what things cost, in addition to what they are worth. NASA is good at many things, but keeping track of costs is not one of them. I think it's been most of a decade since they had an unqualified audit.
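
For anyone who hasn't run into it, EVM boils down to comparing three numbers: what the work was planned to cost by now (planned value), what the completed work was worth at planned rates (earned value), and what it actually cost. Here's a rough sketch of the arithmetic; the task list and dollar figures are invented purely for illustration, not taken from any real project:

[code]
# Rough earned-value sketch; tasks and dollar figures are made up for illustration.
tasks = [
    # (budgeted cost, fraction planned complete by now, fraction actually complete)
    (100_000, 1.00, 1.00),
    (150_000, 0.50, 0.40),
    (200_000, 0.25, 0.30),
]
actual_cost = 180_000  # what has actually been spent so far

planned_value = sum(budget * planned for budget, planned, _ in tasks)
earned_value = sum(budget * done for budget, _, done in tasks)

cpi = earned_value / actual_cost     # cost performance index: >1 means under budget
spi = earned_value / planned_value   # schedule performance index: >1 means ahead of plan

print(f"PV={planned_value:,.0f}  EV={earned_value:,.0f}  CPI={cpi:.2f}  SPI={spi:.2f}")
[/code]

The catch is the actual_cost line: without trustworthy cost accounting, the cost index is meaningless, which is exactly the hard part here.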

My project has deliverables about every eight or nine months for the next four years, then we have a little flurry of deliverables right around launch (in 2013 or so). While I have some internal milestones for my own use - in fact, I have one on January 8th that I really should be working on right now - they are not deliverables. Mine is a pretty small, fairly routine project.

One of my guys was recently asked to "find a good model for Bay 8 heating" on the Hubble Space Telescope. We'd like to have a basic idea of whether we can predict heating in that bay before the Servicing Mission in August. I'm not convinced that the objective is achievable, but if it is, it will be very helpful in extending the life of the observatory, perhaps by as much as a year. He can spend half his time on that ill-defined objective, but I have no good idea how to write a SMART goal for it. It's entirely possible that we can't model Bay 8 thermal in a usable way, because we don't know enough about the spacecraft, the environment, the heat output of the sun, or a host of other factors.

So I focus on whether he is asking good questions of the Thermal people, whether his reports on the options are clear and readable, and whether he's following a good process for validating possible models. I don't even pretend to evaluate the math. I know what he's doing for the next few weeks (including the week he's spending in Kansas with his family, driving both ways in his Subaru) but beyond that neither of us has any clear idea.

tc>

ccleveland

Tom,

How about this:

Goal: By May 1, 2008, develop feasibility analysis of Bay 8 heating model [u]and[/u] review with client.

Is it measurable? Yes: either the feasibility analysis and client review are complete or they are not. Is it timely? Yes: it must be completed by a certain date.

Perhaps there are other ways of breaking down a [u]large[/u] chunk of [u]uncertain[/u] work into more manageable bits. I merely suggested EVM because it is one common way of tracking big deliverables. You don’t have to measure cost in dollars; you could use other measures as well: expected time to complete vs. plan, number of hours worked vs. plan, etc.

Also, I meant a “deliverable” to be any definable part/component of a project. You may be able to break [u]your[/u] 6-8 month deliverable down into smaller work components (like a mini-project plan) that can be completed within a manageable timeframe.

CC

tcomeau

[quote="ccleveland"]Tom,

How about this:

Goal: By May 1, 2008, develop feasibility analysis of Bay 8 heating model [u]and[/u] review with client.
[/quote]

I understand the impulse, but it's not a good goal, because it's not a good deliverable. Rodger doesn't want a feasibility analysis, he wants a model, or an explanation of why we can't get there. In fact, what he really wants is a model and error bars, and some comments on whether the error bars are acceptable. We won't really know if it's feasible until it's either working well enough to use, or we get to the point that nobody has any useful ideas for how to make the model better.

Merle has already gotten within three degrees, which is not quite good enough, but is worth doing as a baseline. The residual problem is quite likely not achievable, but it's worth spending half his time between now and August looking for a way to reduce the errors, because if we had a good model it would tell us a lot about our ability to do science next December, when the Earth gets closest to the Sun again.

I don't want to write a goal that says "build a model that predicts to 1.5 degrees" because it may not be achievable with what we know about the observatory. I don't want to write a goal that says "build a model or explain why it can't be built by August 7, 2008 at 08:41 EDT" because I really don't want a detailed explanation, though we will no doubt be quizzed about the problems with the model if it doesn't work out. And I really want Merle focused on the model, not on covering his... retreat if the model isn't working out.

Besides, I don't really believe the Shuttle Program will make that date. :) I'll believe it at T-31 seconds.

[quote="ccleveland"]
.... you could use other measures as well: expected time to complete vs. plan, number of hours worked vs. plan, etc.

Also, I meant a “deliverable” to be any definable part/component of a project. You may be able to break [u]your[/u] 6-8 month deliverable down into smaller work components (like a mini-project plan) that can be completed within a manageable timeframe.
[/quote]

I do have intermediate milestones. The problem with those is that you have to trust me that they are done. One of my milestones is Monday, when I should have all the interfaces identified for the various modes (flight, I&T, simulation, and modeling). There's no artifact that goes with those interfaces beyond some scrawls on my whiteboard, though if you stop by I'd be happy to talk you through them, why they're needed, etc. The only artifact I produce that anybody can externally review is the Design Document in July.

As for the hours-to-plan, that's one of the screwiest parts of the way our contract works.

We have a plan defined in FTEs, and we're not allowed to under- or over-charge: we have to be within 3% of the plan on a quarterly basis. We get penalized for overspending and for underspending. So if I get ahead of schedule, I'm expected to invent more work to fill the planned hours.
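
In other words, it's just a quarterly variance check on charged versus planned effort - a minimal sketch, with the FTE numbers invented for illustration:

[code]
def within_band(planned_fte, charged_fte, tolerance=0.03):
    # True if quarterly charging stayed within the allowed +/-3% of plan
    return abs(charged_fte - planned_fte) / planned_fte <= tolerance

print(within_band(12.0, 12.3))  # 2.5% over plan  -> True, no penalty
print(within_band(12.0, 11.5))  # ~4.2% under plan -> False, penalized
[/code]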

The thing that really matters, and the thing we really want to accomplish, is to get as much science out of the observatory as we can, with the resources we have assigned. We guess (and sometimes we argue) about what "best science" means.

For Hubble, this usually means a decision I make today will show up in science productivity in one to two years. For Webb, the earliest we'll really know is 2014.

We have no good metrics in the short term.

tc>

eagerApprentice

[quote="WillDuke"]Not sure where I heard Mark talk about this, must have been briefly at the conference.[/quote]

I could have sworn I heard about how to write annual reviews on an MT podcast - but maybe it was another podcast...

Either way, they said 1 page too (whoever it was).

Still, I'm not surprised to see that some of these reviews are so long.

When I was in Taiwan, I taught English to Taiwanese managers, who often asked me to help write their reviews of their directs.

Even though they had to use English and struggled with it, they still wrote pages of review!

The hardest part of those lessons was teaching them what to say, how to say it, and when to stop writing.

bffranklin

Tom,

It looks to me like part of the problem is that you're trying to shoehorn goals around the core components of doing the job. I worked as a security analyst for a couple of years (basically, my job was to catch hackers), and ran into many difficulties with goals. The core of my job was to keep the client safe, but with information security (and this is certainly an indictment of the current state of infosec), there are very few mechanisms for actually quantifying "safe".

It sounds like your hands are tied in similar ways by the requirements of your contract, so I'd go back to the entire concept of "best science." It sounds like a lot of your work is tied up in getting things to either implementation or the bin ASAP. Can you increase the number of projects you're getting through the pipeline? Can you improve the per-job time? Looking at things on a "meta" level may show some indirect things that you _can_ focus on to gain some tangible benefits.

US41

[quote="tcomeau"]I don't want to write a goal that says "build a model that predicts to 1.5 degrees" because it may not be achievable[/quote]

There is no need for goals to be achievable.

Imagine a football team. It's 4th and 10, they are on their own 20 yard line, and the coach does not send in the punting team. You're the quarterback. Are you going to tell your team that their goal is what you think is achievable, or are you going to tell them that you are going to not only get a 1st down but also score and win the game?

If you tell your team that their goal is just to get a 1st down, they will set their sights only that high.

If you tell your team that their goal is the moon, they will plow through to the next level.

The human mind has this strange way of taking goals and then attempting to get 80% of them.

Examples:

* Runners have to be trained to aim beyond the finish line. Amateur sprinters will start slowing down five steps before the finish line.

* Martial artists have to be taught to aim beyond a board to break it. Otherwise they start to slow down before the board and hurt their hands.

* Football kickers have to be taught to aim past the goal posts into the stands behind them, otherwise their kicks tend to fall short.

Another reason to not worry about goals being achievable is opportunity cost. If you set a goal of getting a 1st down, then you will miss out on the opportunity to actually score and win the game.

If you set a goal that is "achievable", you are throwing away any chance of getting more than that.

People go to the finish line they perceive, slow down, and then stop at it.

Place your finish line beyond what they can do and see how far they go.

Goals need to be measurable and time-based. That's all you have to worry about.

Examples:

* Reduce expenses by 20% by End of Q1. (You might think a 1% reduction is all that is possible, but they might achieve 5%. You might think it would take all year to do this - but what if they actually do it this fast or even in six months?)

* Improve efficiency enough to eliminate two full-time positions by End of 2Q.

* Develop a program of training and reduce errors by 25% by EO2Q.

Think about this. What is the goal of the space program? Do you honestly believe that by putting telescopes in space or flying orbital missions around the Earth, it is possible for mankind to develop the technology and wisdom necessary to survive its own violent nature and escape this planet to colonize the galaxy using faster-than-light travel that is safe, affordable, and popular?

This does not seem like an achievable goal. It seems like a measurable, time-based goal - not a SMART goal.

A SMART goal for the space program would read like this:

* Develop a space vehicle that can orbit the Earth and deliver repair crews to existing satellites within the next 100 years.

* Send a man on a one-way mission to Mars who will probably die before he gets there due to psychological stress or micrometeorites puncturing his life support systems and craft hull.

SMART goals are for politicians. Here's an example:

* Reduce the rate of increase in greenhouse gas emissions by 1% by 2100 (thus leaving us still increasing those gases and having no positive effect)

SMART goals stink. They don't achieve anything because they are too achievable. Instead of setting the bar high and driving everyone to achieve, SMART goals set the bar low and drive everyone to give up and go home.

What you want are Manager Tools goals - MT goals. MT goals = Measurable, Time-Based goals.

Set the bar high, and see them perform.

I implemented MT goals in my department. Performance improved 30% over three months.

You work in rocket science, so help your folks reach for the stars, not just the moon.

tcomeau

[quote="bffranklin"]... there are very few mechanisms for actually quantifying "safe".
[/quote]

That's a pretty good analogy, actually. For Merle's project, the quality goal would be to match the behavior of the Observatory to within something like a tenth of a degree. There isn't an "oracle" (to use the software-testing term) I can go to in order to check the correctness of the model. I can tell you right about this time next year whether he got it right, unless the New Outer Blanket Layer gets installed in August and changes the behavior of the observatory.

[quote="bffranklin"]
...go back to the entire concept of "best science." It sounds like a lot of your work is tied up in getting things to either implementation or the bin ASAP. Can you improve the number of projects you're getting through the pipeline? Can you improve the per-job time?
[/quote]

It's best science, not most efficient science. We have an efficiency metric for the whole observatory, and dozens of people contribute to improving that metric in ways that are difficult or impossible to untangle. (And just for amusement value, our current numbers are roughly twice the pre-launch NASA goal.)

For Merle, I don't want him to do more jobs, or do them faster or cheaper. Those are efficiency metrics, and we actually have gotten penalized for being more efficient. I want him to do better. I want the model to be as close as possible to the Observatory, but I won't know if he's successful for a year or two, when we see how the Observatory behaves.

The notion of MT goals is attractive, but I can't get there.

It just occurred to me that there is an echo of Heisenberg here: If I can Measure a goal, I can't set an appropriate Time for it; if I can set a good Time, I can't find a good Metric.

tc>

bffranklin

Tom,

Okay, well, extending the security analogy, the optimal way to test a group of analysts is to have a "red team" attack the client's network and measure how long it takes the analyst group to detect the attack. You can tweak this by providing different knowledge levels to the red team -- you can simulate zero-knowledge attacks by not telling the red team anything, or simulate disgruntled IT employees by telling the red team everything.

Could you run previously solved problems by your team during the periods when you need to invent work? Take something you got right in the past (or wrong, for that matter) and run it by some team members who haven't seen it before?

Admittedly, these still sound like they're bordering on efficiency metrics. That said, it looks like a lot of the problem is tied up in the word "best." What are the qualities of best science? Which of those can have metrics attached to them? Would you be able to set better goals if everything went from conception to production in a year? What would those goals be without the one-year cap on the time constraint?

tcomeau

[quote="US41"][quote="tcomeau"]I don't want to write a goal that says "build a model that predicts to 1.5 degrees" because it may not be achievable[/quote]

There is no need for goals to be achievable.
...

What you want are Manager Tools goals - MT goals. MT goals = Measurable, Time-Based goals.

[/quote]

Again, I understand the notion of, and the motivation for, setting goals that will probably not be achievable. It would be unethical for me to do that.

The immediate problem is that raises are set by formula, based on your evaluation, which is in turn based on meeting performance appraisal goals. I don't get a "bucket" of money I can dole out as I see fit. The total pool and the formula are set by the Directorate, and I have very little wiggle room. If I don't set goals my guys can achieve, they pay for it. I'm not willing to do that.

A SMART goal for space exploration would look something like this:
[quote="John F. Kennedy"]
I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish.
[/quote]
That goal was achieved, ahead of schedule. The Nixon administration was unwilling to commit to new goals and the resources to achieve them, with the results you see today.

I also know what can happen when you set unrealistic goals and provide inadequate resources. The result of "Node 2 Delivery and Station Core Complete by February 19, 2004" was seven dead astronauts, a lost Orbiter, higher costs and fewer capabilities for the station, the evaporation of half a billion dollars for space science, and dozens of lost opportunities.

I also know that setting reasonable goals does not necessarily result in lower expectations or reduced effort. The goal for the MER program was to operate two rovers for 90 days within 100 meters of the landing site. Four years later, Spirit has traveled over 7500 m, and Opportunity is approaching 12 kilometers.

I'm not sure what goals the space program, overall, should have. It would be unethical for me to suggest that FTL travel is even possible. I do believe that as we learn more about the nature and origins of the Universe we enrich the species, whether it has immediate practical applications or not: That knowledge is an objective good worth pursuing. It is not, however, an "ordinary business problem." It is emphatically non-economic.

I use the Manager Tools techniques because they work for me. I use them even when I don't like them. (I [u]really[/u] don't like handshakes. I do them the MT way because I can see a positive effect, but it's a conscious, uncomfortable behavior.) I focus on behavior because I can affect behavior and not care about "attitude."

This one isn't working, and as I indicated in the previous post, I think it's because of this Goal Uncertainty Principle.

tc>