
Solving The Gripper Rating Problem


Jedd Johnson


In the Metroplex Mayhem at Metroflex 3 thread, I inadvertently hijacked the discussion by bringing up the difference between the ratings the #4 Gripper has received on Chad Woodall's set-up and on Eric Milfeld's. Real sorry, everybody - that's my bad.

Mighty Joe then posted this in response to my question:

A while back I mentioned the process concerning gripper calibrations and angered many folks, got a thread locked, and was accused (offline) of being a "know-it-all." With that in mind, I shall walk cautiously here.

1) Why don't all those who calibrate grippers make a video of the exact process they are using to calibrate grippers?

2) If you have 4 individuals calibrating the same gripper and all 4 come up with different numbers (sometimes drastically different), something is wrong. The only way to troubleshoot the problem is to first identify it, and one way to identify the problem(s) is for the players involved to video their procedures.

3) One huge factor is simply adding weight till the handles touch. This will result in an incorrect number every time, because of a torsion spring's nature to constantly rebound back toward its resting position. This is solved by pulling down on the gripper handle to where the handles touch after you have added weight. For example, if you're calibrating a #3 gripper, your beginning weight is 145 lbs., and the handles are an 1/8" short of touching, you can slightly pull down on the top handle to allow the handles to touch; the gap may be a matter of 1/32" now, which could mean adding only a pound or two, whereas if you didn't do this it might take 5-10 lbs. I've seen this time and again at Eric's. Bottom line: if you're simply adding weight till the handles touch, your calibration is wrong.

4) Another factor is the width of the strap and the placement of said strap on the top handle when calibrating. Simple physics tells us that force applied to a given area spreads out and is distributed over that area based on the dimensions of the objects under measure. The next factor is the leverage applied to the top handle based on the placement of said strap.

5) Another factor is the calibration plates. What scales are being used, and from what source? We are all assuming people are using weight plates that have been weighed at the post office. I will not name names, but I personally know of instances where this was not the case, simply because the individual didn't think it made that big of a difference. Wrong answer!

I could mention more points of a technical nature, but they're not necessary to make my point(s). This is why all the current charts going up all over the place could be, and probably are, flawed. One individual went as far as saying calibration can never be precise. I strongly disagree! If you can come up with standards for measurement and accurately quantify something, then YES, gripper calibration can become very precise.

I propose the following:

ALL major players currently calibrating grippers choose one gripper; each individual calibrates that gripper and then sends it to the next calibrator, without any of them knowing what calibration was reached (I can explain how this can be done if agreed). After each individual has calibrated the same gripper, then and only then will we reveal the numbers. ALL individuals involved in this experiment MUST video their calibration of the TEST gripper before it is sent to the next calibrator. More details if there's interest.

My final thought on calibrating grippers is this: no one has a right to complain if no one can agree on the procedures and processes for calibrating grippers. Just my thoughts.

One last thing. I will pay ALL shipping costs for the TEST gripper in this experiment, to and from the individuals doing the calibrating. Also, since I'm not an expert in the physics involved, I have an individual who is (name and credentials revealed after the experiment) who is quite willing to examine the videos submitted and contribute an expert opinion on the procedures/processes.

If this is not trying to contribute to this issue in a positive and professional manner, then I don't know what is.

Let's quit talking and let's solve the problem.

Thanks for your time!

This is definitely an issue we need to try to fix. Joe has been kind enough to cover the shipping charges, as bolded in the quoted section.

I think we need to take Joe up on this offer and see if we can figure out what is going on here. We have to try to tighten up our systems as much as we can.

I will also step forward and donate the TEST GRIPPER for the cause. I will send it to the first Gripper Rating Location.

Which of you is willing to take the time to do this? Chris Rice, Aaron Corcorran, and Eric Milfeld come to mind right away as names who have Rated a LOT of Grippers used in contests. Are there more that I am forgetting? Maybe Ben Edwards?

Not to volunteer you guys to do work, but I do hope that you will help. I will even use the Gripper Rating Device that I have prior to sending the Gripper out.

Let's try to tackle this.

Who's in?

Jedd


I'd like to get in on this and test my setup, especially if I'm going to be doing contests with my grippers up here.


Great idea! I'm out, however, because I don't have a power rack anymore for my gripper calibrator. And I'm looking to sell it anyway to someone who is able to get their plates calibrated (weighed) at a post office that will let them do that.


I think I may have mentioned this in a previous discussion but I want to add that you may want to consider showing reproducibility at each location for each gripper tested. Multiple results will indicate precision within a location and perhaps help point to any potential problems there or show that a problem exists which is independent of a particular location (location meaning a specific combination of apparatus, weights, calibrator, technique, environment, etc.).

For example, three calibrators in three separate locations get results of 150, 160, and 162 lbs. It looks like the 150 is an outlier. However, multiple results at each location might give results like: (1) 150, 152, 152, (2) 160, 155, 156, (3) 162, 155, 170. You now have averages of 151, 157, and 162 (n=9 ave: 157, sd of 6.2) with standard deviations of 1.2, 2.6 and 7.5.** Not only does data set (1) look better in terms of that location's ability to reproduce data (but may or may not be accurate), but the averages are indicating a persistent accuracy problem since there's a lot of scatter both within sets (2) and (3) as well as between all three sets. [Please keep in mind this is made-up data to make the point of the usefulness of multiple determinations.] **Hopefully all is calc'd right.
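For anyone who wants to double-check those made-up numbers, here's a minimal Python sketch using the same hypothetical values as above (sample standard deviation, nothing fancy):

```python
# Minimal sketch: per-location and overall averages / standard deviations
# for the hypothetical example data above (not real calibration results).
from statistics import mean, stdev

locations = {
    "location 1": [150, 152, 152],
    "location 2": [160, 155, 156],
    "location 3": [162, 155, 170],
}

for name, results in locations.items():
    print(f"{name}: ave = {mean(results):.0f}, sd = {stdev(results):.1f}")

all_results = [r for results in locations.values() for r in results]
print(f"overall (n={len(all_results)}): ave = {mean(all_results):.0f}, sd = {stdev(all_results):.1f}")
```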

Btw, has anyone checked with the post office to see what their calibration (or standardization) procedure is? It might help determine what can and can't be assumed regarding their scales. (If not, I'll do it if for no other reason than curiosity!)

Might as well list my suggestions (post is too long!):

1 - At least three determinations done by each location on at least one specified standard gripper (if not more).

2 - Check extremes of gripper strengths to ensure the effective operational range (if more than one gripper).

3 - Consider sending a video of techniques before sending gripper(s). There may be some differences that the calibrators will want to test out on the standard gripper(s) but perhaps there are other differences that can be worked out and proceduralized beforehand.

4 - Consider environmental effects if only as secondary causes. Record ambient temps and any other conditions (humidity, different days, whatever) for later discussion; at least you'll have it if you need it.

Use any or none of this post; I'm just throwing it out into the ether. This project will make for some great trouble-shooting - it will be very interesting to follow its progression. :) Best of luck!


Hold up!

It is GREAT that someone (Jedd) agrees that we need to fix this issue, BUT if I'm paying for the shipping (and I'll pay every dime), it's going to be done precisely and very well planned. Xengym has made some good points here.

Many steps must be taken and many questions answered before any gripper is calibrated. I will start on a list this week and start planning the whole process. I will supply the TEST gripper(s).

IMPORTANT POINT: If this is not done correctly it could be flawed and biased. You'll see what I mean when I list some questions and an outline of the processes and procedures.

There's a long way to go, but I'm thrilled that we are taking steps to solve what I call a major problem in gripper calibration.

Jedd is correct in asking who the TESTERS for the experiments are going to be. Those volunteers should PM me for precise instructions concerning shipping, etc. I will need the mailing addresses of those volunteers.

If the volunteers are not willing to video their process they cannot be part of the experiment. They MUST also agree to various other procedures that I will PM them beforehand. NO results will be posted on the GB until AFTER ALL testing is completed and data gathered. No exceptions!

I need to know WHO is volunteering for the calibrations. It would be nice if it were the individuals who calibrate the majority of the grippers today. PM me if you're up for this experiment.

Thanks!


Before we go any further - I think we need to see if all the Gripper Calibrators are at least made to the same specs. If we aren't all using the design and measurements that Dave and Greg came up with (or at least identical designs) - then all the rest of this is irrelevant.


First, I think it would be important to get Matti from Finland involved as well. I'm not sure anyone has rated more grippers than him.

I have also rated countless grippers, and spent a great deal of time and energy on the process. I would like to participate in the experiment as well. I will work on a video of my process, which I have been meaning to do anyway.

From reports of what Eric does, my process sounds very similar to his. I have also, by a matter of chance, cross-rated grippers with Eric. One example is an Elite that I borrowed from Greg Griffin: Eric had rated it at 164# and I got the same number. Another example is a #4 that I sold to Paul Knight, which Eric subsequently calibrated, and in that case our numbers differed by a few pounds.

This brings up the issue of margin of error. I have always assumed there is at least a 2.5# margin of error in the process. I firmly believe, from the experimenting and testing I have done, that it is not possible to get the exact same number every single time.

I wanted to get a few things out there which I feel need to be clarified for the testing:

1) Oiling the gripper. I think this is the biggest factor in the margin of error. I have had over 10# spreads in ratings on the same gripper based on how recently it was oiled. Maybe more on heavy grippers that were really creaky.

2) Rounding. I personally drop the decimal from my ratings. 152.63 becomes 152. I think it's foolish to believe we can be this accurate, and it doesn't make any sense to round up. If it didn't make it to the next pound, which I consider the minimum increment, then it didn't make it. (In code terms this is a floor, not a round - see the small sketch after this list.)

3) Strap. I think everyone should use the exact same strap if possible. I don't believe it's enough to just try and get a 1" strap. There are different thicknesses, different materials, different lengths, and I have a hunch this all adds up and affects the margin of error. We could all agree on a readily available strap at Home Depot or something. Also, how have others made a loop? I just tied a knot. My RGC is not very high off the ground so I might have a smaller loop. Does this matter? I'm not sure, but my RGC used to be taller when mounted to an old bench and my numbers did change before and after. Not by much, but in general they are lower now. I also improved my process otherwise, so I can't say for sure it was directly related to the strap. But it could be a factor.

4) Spreader. I use a short piece of PVC so that there are no corners pressing into the strap. I realized when using a wooden block that it makes pressure points on the strap and can actually create slack as the strap goes over the handle. For example, while the weight is hanging, there is a pucker on the front side of the strap, but not the back. To me this means that the weight is not properly pulling on the end of the handle and the force is not evenly spread over the 1" thickness of the strap. I have found that the round PVC allows the strap to self-correct and the "pucker" is not as big of an issue.
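Referring back to point 2: a tiny Python sketch of that rounding rule, just as an illustration (a floor, never a round-up) - not anyone's official procedure:

```python
import math

# Drop the decimal entirely; never round up to the next pound.
print(math.floor(152.63))  # 152
print(math.floor(152.99))  # still 152: it never made it to 153
```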

These are the things I could think of off the bat. I will work on making a process video and will check back later when I have more time to collect my thoughts. :D


Joe, sorry if I made it sound like we had to start this over the weekend. Obviously we need to have it all planned out. That is what this thread is for.

Can I ask a quick question? Is this all because Eric's equipment is not built the same way as, say, Chris Rice's? I believe most people who rate grippers do so with a device that is attached to a squat cage. Isn't Eric's hung on a piece of lumber attached to the wall in his garage? I stood right beside it two years ago and can't even remember for sure. Can this design have the effect of rating a gripper 5 to 7 lbs lower?

I just want to find out how to simplify and control the matter of gripper ratings. If it is because Eric's system is different and puts out a lower rating compared to everyone else's, then the problem is Eric's set-up, is it not?

Thoughts?



Mounting of the unit could affect the margin of error. If the unit has any give, for example, or flexes under heavy weight, that will affect the number. I know mine does flex a little under 200+ pounds.

I don't think it's just Eric's set up. I know it's just one example, but remember the #4 he calibrated after me. His number was higher than mine in that case.

One issue that I touched on above is that I'm not sure others acknowledge a margin of error, no matter the setup. If Eric's calibration was only different by 5 lbs on a #4, I'm not sure there is a problem. My testing has shown that 5 lbs might be an acceptable margin of error to hope for with a gripper that heavy. I might be alone in this, but I do not expect to get the exact same number every single time. Sometimes I do. But if I calibrate the same gripper more than a couple of times, I inevitably get a different number.

We might need to consider each taking three runs at each gripper and averaging the results. That might be appropriate.


There are too many unknowns right now. The issue seems to be Eric's numbers. So what is different about his equipment (if anything), his methodology (if anything), his weights (if anything), etc.? Are all the devices level throughout the test? Is the tube that holds the gripper the same length for everyone, the strap the same, the strap spreader, etc., etc.? Are all the grippers cleaned and freshly oiled (and I suppose with the same oil)? I know I didn't do that in the beginning and neither did anyone else back when we started.

Before we go any further (or at least before I go any further), let's get the specs for the actual device decided. I built mine from Dave and Greg's specs - I built the second RGC ever made, and Greg and I sent grippers back and forth to check out our results. The only way this will ever be resolved permanently and to everyone's satisfaction is to have one set of competition grippers that have all been done with the same device and by the same person (or maybe a committee of people) - then and only then will "everyone" be satisfied. As it is, everyone wants to believe they are the "expert" and best qualified to say how it's to be done.


There are a LOT of unknowns right now - no conclusions should be drawn at all, to avoid missing a piece of the puzzle. Eric's set-up looks on the surface like the odd one out, but like Matt said, it may very well not be. We won't know until we get some controlled data. I did method development and validations in my job - the process always begins with the apparatus you're using as the 'detector'. Here it's the RGC. Dimensions, heights, materials, etc. should be considered - some aspects may affect numbers, some won't. Surely strap material could be taken from the same reel and shipped around for the test? This isn't a validation yet - it's still in development; some preliminary work should be done, but then you need a starting point with real data too.

I think all the calibrators have done the best job they could knowing that the ratings they put on these grippers will be used from then on. And, it's not like they are getting paid for this! lol I suspect all of them will end up changing something if only to help standardize the equipment and/or techniques for the good of the community. Kudos to those willing to jump in and put their calibrations under the glass - it's not easy when they have worked hard to get to this point. Yet, consider the alternative: one person doing all the 'official' calibrations from now on. Surely, no one wants to sign up for that.

@Cannon, of course you're right, there's always a margin of error; it's just a matter of how big. Some tests I'd perform in the lab were easily reproducible to less than 0.5% s.d. of the average value (i.e., all test values fell within 99.5-100.5 for a known value of 100). Another test, by its very nature, would be hard-pressed to get down to 20% s.d. no matter how hard the analyst tried to eliminate error. Eliminating systematic errors like bias, temp differences, oiling, and weight differences (as in plate weights) will help bring it down, but there will always be some. I guess y'all will have to decide what is acceptable once you get past the main contributors to that error (meaning aspects of the process). It's in discussion right now exactly because the consensus is that the margin is currently too big.

This is a great brainstorming session. I put in my two cents so that Joe, and whoever else, will have as much to work with as the community can give. He may not use it all or be able to (testing a lot of variables can get unwieldy fast), but at least it's been mentioned. I do agree with him that how the test is finally set up and controlled can make or break the value of the data. I think y'all could get some very valuable information from this.


GREAT posts here!!!

Jedd, no problem at all brother. I know you didn't mean it that way.

Cannon, you have some valuable input here also. Thanks!

Xengym and Climber are absolutely correct! There are simply too many unknowns and no real data as of yet. There are already several individuals at work on these unknowns (they will not be mentioned or named until all data is gathered). Hopefully this is understandable for GB members following this thread.

I'm currently working on torsion spring behavior with some help from an expert. Sources for scales of a certain accuracy are also being checked. Calibrator dimensions and design are being researched. The main thing is to come up with, and agree to, a set of standards that are verifiable and can be reproduced.

I will have more to add later.

I appreciate ALL the input here. Thank you very much!!!

I'm compiling a list of variables that I would like GB member input on. I should have the list done before week's end.

Thanks again!!!


For example, three calibrators in three separate locations get results of 150, 160, and 162 lbs. It looks like the 150 is an outlier. However, multiple results at each location might give results like: (1) 150, 152, 152, (2) 160, 155, 156, (3) 162, 155, 170. [...]

You've got the right idea with this, but you can take it even further. You can use analysis of variance and run an F test to decide whether or not these three data sets can be deemed independent (within a certain level of probability). Independence between data sets for the same gripper would indicate the tests are NOT being performed equally among locations. Going even FURTHER, using a Scheffé post-hoc test, you can actually determine which individual location or locations are independent of the rest of the group. HOWEVER, in order to use this method with reasonably accurate results, there would need to be at least 10 measurements taken at each location.

I never thought studying to be a mathematician would help me on the gripboard :upsidedwn .
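For anyone who wants to try that first step, here's a minimal sketch of the one-way ANOVA (F test) in Python with SciPy, run on the made-up example numbers above - the Scheffé post-hoc step isn't shown:

```python
# Minimal sketch: one-way ANOVA across the three hypothetical locations
# from the example data (not real calibration results). Requires SciPy.
from scipy.stats import f_oneway

location_1 = [150, 152, 152]
location_2 = [160, 155, 156]
location_3 = [162, 155, 170]

f_stat, p_value = f_oneway(location_1, location_2, location_3)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A small p-value (say, below 0.05) would suggest at least one location's
# mean rating differs from the others; a post-hoc test would then be
# needed to pin down which one.
```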



Oh no, a math nerd! lol You're right - there are lots of statistical tools out there but, as you said, many tend to work best with larger data sets. I don't think calibrators will be wanting to do 10 time-consuming measurements on a 200lb gripper out of love of math purity (well, some might lol). (Although, ANOVA and F-tests work with as few as 3, if memory serves.)

I was suggesting that a single measurement could really be misleading and was only hoping that 2-3 would be considered. :)


10 measurements is probably overkill for our purposes but it would allow for pretty accurate results. But you are definitely right, if each person only submits one calibration number, the entire process is next to useless.

These are just some things to keep in mind when planning this experiment out.


I have probably done several hundred grippers now and I can tell you I ain't doing ten on each!



:laugh :laugh



I was wondering if you were going to chime in on this. LOL!

Thanks, Professor Rue! :D


Is this all because Eric's equipment is not built the same way as, say, Chris Rice's? [...] Can this design have the effect of rating a gripper 5 to 7 lbs lower?

Can't say for sure on Eric's set-up at the moment. More information and testing is needed.

I have my hands full for now just trying to ask the right questions. LOL!

MORE as soon as I gather more facts, data, etc.


If people want to store the results here http://www.grippersuperstore.com/rgc.aspx they are welcome to. There is a field for gripper description - if the same description is used by multiple people for the same gripper we can compare results.

Wade

I checked out the gripper superstore link. The force listed for each gripper was a lot lower than the rating given for a gripper (like a #2 is rated at 195lbs but the force was about 109). Is the force in pounds? Do you know why there is a difference?



1) Oiling the gripper. I think this is the biggest factor in the margin of error. I have had over 10# spreads in ratings on the same gripper based on how recently it was oiled. Maybe more on heavy grippers that were really creaky. [...]



Matt nailed it ... #1) Matt made it the standard to oil each gripper before cal'ing it ... this was once we discovered how much of a difference oiling the spring actually made ... so if you cal'ed a gripper without oiling and then oiled it later before a contest or whatever, all of a sudden you really have a different gripper, because oiling can make it that much easier. This, I believe, is the biggest difference.

I checked out the gripper superstore link. The force listed for each gripper was a lot lower than the rating given for a gripper (like a #2 is rated at 195 lbs but the force was about 109). Is the force in pounds? Do you know why there is a difference?

This is a difference in rating methods. RGC rates from the end of the handle. Manufacturer ratings (if they are even done) are assumed to be from the center of the handle.
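To put rough numbers on that explanation: torque = force x lever arm, so the same spring shows a smaller force the farther from the spring you measure. The distances in this little sketch are purely hypothetical - they are not actual RGC or factory dimensions - and are only meant to show the shape of the conversion:

```python
# Hypothetical lever-arm sketch: why one gripper can carry two different
# numbers depending on where along the handle the force is measured.
d_center = 2.0   # hypothetical distance from spring to mid-handle (inches)
d_end = 3.6      # hypothetical distance from spring to end of handle (inches)

force_at_center = 195.0                            # e.g. a mid-handle rating (lbs)
force_at_end = force_at_center * d_center / d_end  # same torque, longer lever arm
print(round(force_at_end, 1))                      # ~108 lbs
```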


I had another comment about the RGC design that I don't hear people talk about much. I really noticed this when I started to try and calibrate the 5/8" handle grippers.

What I can't figure out is why the bottom handle is not fully supported. The only reason I can figure is that with the current design, if the handle does not stick way out, you won't be able to put the strap over the top handle because of the vertical safety bars. But this allows the gripper to tip a little, unless you've wrapped it with tape so it fits tight. But this adds yet another level of variability. How much tape, what kind of tape, maybe heavy grippers squish the tape and tip anyway...etc.

When I was working with a process for the 5/8" handle grippers, which are even more prone to tipping, this made a huge difference in the final number. If the gripper tips at all, the rating will be too heavy.

If the safety bars were angled backwards slightly, they would still serve the same purpose of protecting against spin-out, the bottom handle could be fully supported, and you'd still have access to put the strap on the top handle. I believe this is one of the biggest changes that could be made for better consistency - again, along with oiling and a few other things mentioned previously.

