Last weekend I spent a considerable amount of time making origami frogs and making them jump. The assignment was to create an experiment. I chose to create a factorial design with three main effects and a blocking factor.

A voice recording of my attempt to explain the analysis below accompanies this report.

For each of the three variables, a “low” and a “high” level were chosen, as outlined in the table below.

| Factor | Description | Levels |
| --- | --- | --- |
| Size | large (4.75″ x 9.75″); small (2.25″ x 4.5″) | large (+1); small (-1) |
| Paper Type | origami paper (thinner); computer paper (thicker) | origami (+1); computer paper (-1) |
| Paper Clip | Half of the frogs in each replicate had a paper clip protruding outward from the triangle face, directly on top of the middle seam. Half of each clip was on the frog and the other half was in the air. The same size paper clip was used on both small and large frogs. | clip used (+1); clip not used (-1) |

All replicates were tested on the same oak table; however, the six replicates were divided into two groups. A fine layer of sand was poured on the table for the second block. I used a blocking factor because it would have been impractical to remove the sand between runs. The theory was that sand could help prevent some frogs from tumbling, therefore increasing the odds that frogs destined to land upright remain upright.

My aim was to achieve a power greater than 90% with a full factorial design. I considered models with either three or four factors. A three-factor model needed six replicates, for 48 runs and a power of 0.92217.

I also considered four factors with three replicates, again 48 runs, giving a power of 0.920780.

For the four-factor model, I considered fractional factorial designs, which would have aliased the main effects with three-way interactions and the two-way interactions with one another. I was unsure whether the two-way interactions would be significant, but I was comfortable assuming the three-way interaction would not be. I weighed several variables as the potential fourth main effect, considering how easily each could be replicated. Some of the potential factors were also too subjective, such as the quality of the jumping crease, the degree to which certain frogs “slid” or “bounced”, and the order in which the frogs were made (many pairs were made simultaneously in steps).

Eventually I decided against including a fourth main variable, opting for a full 2^3 factorial design. However, I included a blocking factor that split the design in half, altering the landing surface of the table. This resulted in a power level similar to the other two options, approximately 0.92: slightly better than the four-factor option with three replicates and slightly worse than the three-factor option with six replicates and no blocking factor. The resulting power calculation is below.

The effect size and sigma were entered as 7 and 7, since ultimately it is their ratio that matters (Lesson 12 conversation).
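Minitab's power figure can be reproduced by hand with a noncentral-t calculation. This is a sketch, not Minitab's internal method: it assumes the test on a coded coefficient uses the 39 error degrees of freedom left after fitting the intercept, block, and seven factorial terms.

```python
import numpy as np
from scipy import stats

effect, sigma, alpha, n_runs = 7.0, 7.0, 0.05, 48
df_error = n_runs - 9                             # intercept + block + 7 factorial terms leave 39 df
ncp = (effect / (2 * sigma)) * np.sqrt(n_runs)    # noncentrality of the t-test on a coded coefficient
t_crit = stats.t.ppf(1 - alpha / 2, df_error)
power = stats.nct.sf(t_crit, df_error, ncp) + stats.nct.cdf(-t_crit, df_error, ncp)
print(round(power, 4))                            # close to Minitab's 0.921853
```

Because only the ratio effect/sigma enters the noncentrality parameter, entering 7 and 7 is equivalent to entering 1 and 1.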

Full Factorial Design

Design Summary

Factors: | 3 | Base Design: | 3, 8 |

Runs: | 48 | Replicates: | 6 |

Blocks: | 2 | Center pts (total): | 0 |

Block Generators: replicates

All terms are free from aliasing.

Power and Sample Size

2-Level Factorial Design

α = 0.05 Assumed standard deviation = 7

Method

Factors: | 3 | Base Design: | 3, 8 |

Blocks: | 2 |

Including blocks in model.

Results

Center Points Per Block | Effect | Reps | Total Runs | Power |

0 | 7 | 6 | 48 | 0.921853 |

As stated before, I wavered for a while between including three or four factors in this model. I opted for simplicity, but could not resist including a blocking factor. Blocking helps reduce the mean squared error and generally improves the power of the test, although in this case the power was already good.

Originally, I thought I might have been able to run the experiment with two or three replicates. I knew at least two were required to get an estimate of variation.

I eventually figured out that six replicates would be required for a 2^3 full factorial design. I considered including center points, but decided they are more appropriate for 3^k factorial, central composite, and Box-Behnken designs, where additional levels make measuring curvature more important.

Logistically, I found that the quality of my frogs worsened and then improved over the course of manufacturing. I was unsure whether to include some sort of quality variable to judge the creases, or a covariate for the creation order of the frogs.

The design was created in the Minitab file. In a separate Excel worksheet, I recorded the ten distances and landing positions for each of the 48 runs. Distances were measured in inches, and landings were coded as either 0 (not landing on feet) or 1 (landing on feet). The average and standard deviation of the ten repetitions were calculated for each run and included as rows in the Minitab project file.
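The per-run summaries can be sketched as follows; the distances and landing indicators here are hypothetical stand-ins for one run's actual data.

```python
import numpy as np

# hypothetical distances (inches) for the ten repetitions of one run
jumps = np.array([12.5, 9.0, 15.2, 11.1, 8.7, 14.0, 10.3, 13.6, 9.9, 12.2])
landed = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])   # 1 = landed on feet

y1 = jumps.mean()           # run average distance -> response Y1
s1 = jumps.std(ddof=1)      # run standard deviation -> response s1
successes = landed.sum()    # feet landings out of 10 -> binomial response
print(y1, round(s1, 3), successes)
```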

I ran a factorial regression of each run's average distance against the blocks (sand) and the factors A (size), B (paper type), and C (clip). In all, I built one model for the mean response and a second for the variability of the repetitions within each run.

Below is the analysis of variance modeling the mean response of distance:

Factorial Regression: Y1 versus Blocks, A, B, C

Analysis of Variance

Source | DF | Adj SS | Adj MS | F-Value | P-Value |

Model | 8 | 712.66 | 89.083 | 3.29 | 0.006 |

Blocks | 1 | 80.60 | 80.601 | 2.97 | 0.093 |

Linear | 3 | 557.68 | 185.894 | 6.86 | 0.001 |

A | 1 | 462.52 | 462.521 | 17.06 | 0.000 |

B | 1 | 94.08 | 94.080 | 3.47 | 0.070 |

C | 1 | 1.08 | 1.080 | 0.04 | 0.843 |

2-Way Interactions | 3 | 70.86 | 23.621 | 0.87 | 0.464 |

A*B | 1 | 9.54 | 9.541 | 0.35 | 0.556 |

A*C | 1 | 61.20 | 61.201 | 2.26 | 0.141 |

B*C | 1 | 0.12 | 0.120 | 0.00 | 0.947 |

3-Way Interactions | 1 | 3.52 | 3.521 | 0.13 | 0.721 |

A*B*C | 1 | 3.52 | 3.521 | 0.13 | 0.721 |

Error | 39 | 1057.48 | 27.115 | ||

Lack-of-Fit | 7 | 74.68 | 10.668 | 0.35 | 0.925 |

Pure Error | 32 | 982.81 | 30.713 | ||

Total | 47 | 1770.15 |

Model Summary

S | R-sq | R-sq(adj) | R-sq(pred) |

5.20720 | 40.26% | 28.01% | 9.51% |

Coded Coefficients

Term | Effect | Coef | SE Coef | T-Value | P-Value | VIF |

Constant | 13.133 | 0.752 | 17.47 | 0.000 | ||

Blocks | ||||||

1 | 1.296 | 0.752 | 1.72 | 0.093 | 1.00 | |

A | -6.208 | -3.104 | 0.752 | -4.13 | 0.000 | 1.00 |

B | -2.800 | -1.400 | 0.752 | -1.86 | 0.070 | 1.00 |

C | 0.300 | 0.150 | 0.752 | 0.20 | 0.843 | 1.00 |

A*B | 0.892 | 0.446 | 0.752 | 0.59 | 0.556 | 1.00 |

A*C | 2.258 | 1.129 | 0.752 | 1.50 | 0.141 | 1.00 |

B*C | 0.100 | 0.050 | 0.752 | 0.07 | 0.947 | 1.00 |

A*B*C | -0.542 | -0.271 | 0.752 | -0.36 | 0.721 | 1.00 |

Regression Equation in Uncoded Units

Y1 | = | 13.133 – 3.104 A – 1.400 B + 0.150 C + 0.446 A*B + 1.129 A*C + 0.050 B*C – 0.271 A*B*C |

*Equation averaged over blocks.*

Alias Structure

Factor | Name |

A | A |

B | B |

C | C |

Aliases |

I |

Block 1 |

A |

B |

C |

AB |

AC |

BC |

ABC |

Fits and Diagnostics for Unusual Observations

Obs | Y1 | Fit | Resid | Std Resid | |

30 | 5.20 | 20.68 | -15.48 | -3.30 | R |

*R Large residual*

The ANOVA table and Pareto chart were used to simplify this model. I began by removing the three-way interaction term, reallocating its degree of freedom to the error term. After refitting, the two-way interaction terms were still not significant, so they were removed and the model was refit with only the main effects. The contour plots for distance show slight curvature between factors A (size) and B (paper type) and between factors A (size) and C (paper clip), but not enough to indicate significance.

The main effect for factor C (clip) was still not significant at any reasonable level for alpha. It was removed from the model.

Factor A (size) is the only factor significant at alpha = 0.05. The blocking factor (sand, p = 0.084) and factor B (paper type, p = 0.062) would be significant at a more lenient alpha of 0.10, and they are kept in the model for this reason.

Below, we fail to reject the null hypothesis that the model suffers from lack of fit. Most of the error is attributed to pure error, and there are many error degrees of freedom thanks to the six replicates of each treatment combination.

The predictive power of this model is poor to average, with an adjusted R-squared of 31.63%. My poor folds or clumsiness may have played a role, although having a single operator should have kept any such mistakes consistent across factor levels.

Factorial Regression: Y1 versus Blocks, A, B

Analysis of Variance

Source | DF | Adj SS | Adj MS | F-Value | P-Value |

Model | 3 | 637.20 | 212.40 | 8.25 | 0.000 |

Blocks | 1 | 80.60 | 80.60 | 3.13 | 0.084 |

Linear | 2 | 556.60 | 278.30 | 10.81 | 0.000 |

A | 1 | 462.52 | 462.52 | 17.96 | 0.000 |

B | 1 | 94.08 | 94.08 | 3.65 | 0.062 |

Error | 44 | 1132.94 | 25.75 | ||

Lack-of-Fit | 12 | 150.14 | 12.51 | 0.41 | 0.950 |

Pure Error | 32 | 982.81 | 30.71 | ||

Total | 47 | 1770.15 |

Model Summary

S | R-sq | R-sq(adj) | R-sq(pred) |

5.07432 | 36.00% | 31.63% | 23.83% |

Coded Coefficients

Term | Effect | Coef | SE Coef | T-Value | P-Value | VIF |

Constant | 13.133 | 0.732 | 17.93 | 0.000 | ||

Blocks | ||||||

1 | 1.296 | 0.732 | 1.77 | 0.084 | 1.00 | |

A | -6.208 | -3.104 | 0.732 | -4.24 | 0.000 | 1.00 |

B | -2.800 | -1.400 | 0.732 | -1.91 | 0.062 | 1.00 |

Regression Equation in Uncoded Units

Y1 | = | 13.133 – 3.104 A – 1.400 B |

*Equation averaged over blocks.*

Alias Structure

Factor | Name |

A | A |

B | B |

C | C |

Aliases |

I |

Block 1 |

A |

B |

Fits and Diagnostics for Unusual Observations

Obs | Y1 | Fit | Resid | Std Resid | |

30 | 5.20 | 18.93 | -13.73 | -2.83 | R |

*R Large residual*

Next, the variability of the 10 repetitions within each of the 48 runs is analyzed, using the standard deviation of the distance measurements as the response.

Analysis of Variability: s1 versus Blocks, A, B, C

Method

Estimation | Least squares |

Analysis of Variance for Ln(s1)

Source | DF | Adj SS | Adj MS | F-Value | P-Value |

Model | 8 | 902.77 | 112.846 | 2.18 | 0.050 |

Blocks | 1 | 6.08 | 6.079 | 0.12 | 0.733 |

Linear | 3 | 426.84 | 142.281 | 2.75 | 0.055 |

A | 1 | 237.93 | 237.928 | 4.60 | 0.038 |

B | 1 | 188.78 | 188.783 | 3.65 | 0.063 |

C | 1 | 0.13 | 0.133 | 0.00 | 0.960 |

2-Way Interactions | 3 | 447.55 | 149.182 | 2.89 | 0.048 |

A*B | 1 | 229.49 | 229.495 | 4.44 | 0.042 |

A*C | 1 | 50.35 | 50.347 | 0.97 | 0.330 |

B*C | 1 | 167.71 | 167.705 | 3.24 | 0.079 |

3-Way Interactions | 1 | 22.30 | 22.303 | 0.43 | 0.515 |

A*B*C | 1 | 22.30 | 22.303 | 0.43 | 0.515 |

Error | 39 | 2015.58 | 51.682 | ||

Lack-of-Fit | 7 | 419.98 | 59.998 | 1.20 | 0.329 |

Pure Error | 32 | 1595.60 | 49.862 | ||

Total | 47 | 2918.35 |

Model Summary for Ln(s1)

S | R-sq | R-sq(adj) | R-sq(pred) |

7.18899 | 30.93% | 16.77% | 0.00% |

Coded Coefficients for Ln(s1)

Term | Effect | Ratio Effect | Coef | SE Coef | T-Value | P-Value | VIF |

Constant | 2.351 | 0.108 | 21.74 | 0.000 | |||

Blocks | |||||||

1 | -0.037 | 0.108 | -0.34 | 0.733 | 1.00 | ||

A | -0.464 | 0.629 | -0.232 | 0.108 | -2.15 | 0.038 | 1.00 |

B | -0.413 | 0.661 | -0.207 | 0.108 | -1.91 | 0.063 | 1.00 |

C | -0.011 | 0.989 | -0.005 | 0.108 | -0.05 | 0.960 | 1.00 |

A*B | -0.456 | 0.634 | -0.228 | 0.108 | -2.11 | 0.042 | 1.00 |

A*C | -0.214 | 0.808 | -0.107 | 0.108 | -0.99 | 0.330 | 1.00 |

B*C | -0.390 | 0.677 | -0.195 | 0.108 | -1.80 | 0.079 | 1.00 |

A*B*C | 0.142 | 1.153 | 0.071 | 0.108 | 0.66 | 0.515 | 1.00 |

Regression Equation in Uncoded Units

Ln(s1) | = | 2.351 – 0.232 A – 0.207 B – 0.005 C – 0.228 A*B – 0.107 A*C – 0.195 B*C + 0.071 A*B*C |

*Equation averaged over blocks.*

Alias Structure

Factor | Name |

A | A |

B | B |

C | C |

Aliases |

I |

Block 1 |

A |

B |

C |

AB |

AC |

BC |

ABC |

Fits and Diagnostics for Unusual Observations

Original Response

Obs | s1 | Fit | Ratio Residual |

18 | 0.989 | 4.417 | 0.224 |

Fits and Diagnostics for Unusual Observations

Transformed Response

Obs | Ln(s1) | Ln(Fit) | Ln(Resid) | Std Ln(Resid) | |

18 | -0.011 | 1.485 | -1.497 | -2.22 | R |

*R Large residual*

In the full model, several factors and interactions have significant effects on the variability of the distance data. I refit the model to see whether anything changes, first dropping the three-way interaction.

Eventually we drop factor C (clip) and all interactions involving it. Factor B (paper type) is kept because the A*B interaction (size × paper type) is significant at alpha = 0.05, and the hierarchy principle requires retaining the parent main effect.

Analysis of Variability: s1 versus Blocks, A, B

Method

Estimation | Least squares |

Analysis of Variance for Ln(s1)

Source | DF | Adj SS | Adj MS | F-Value | P-Value |

Model | 4 | 662.28 | 165.571 | 3.16 | 0.023 |

Blocks | 1 | 6.08 | 6.079 | 0.12 | 0.735 |

Linear | 2 | 426.71 | 213.355 | 4.07 | 0.024 |

A | 1 | 237.93 | 237.928 | 4.53 | 0.039 |

B | 1 | 188.78 | 188.783 | 3.60 | 0.065 |

2-Way Interactions | 1 | 229.49 | 229.495 | 4.37 | 0.042 |

A*B | 1 | 229.49 | 229.495 | 4.37 | 0.042 |

Error | 43 | 2256.07 | 52.467 | ||

Lack-of-Fit | 11 | 660.47 | 60.043 | 1.20 | 0.324 |

Pure Error | 32 | 1595.60 | 49.862 | ||

Total | 47 | 2918.35 |

Model Summary for Ln(s1)

S | R-sq | R-sq(adj) | R-sq(pred) |

7.24339 | 22.69% | 15.50% | 3.67% |

Coded Coefficients for Ln(s1)

Term | Effect | Ratio Effect | Coef | SE Coef | T-Value | P-Value | VIF |

Constant | 2.351 | 0.109 | 21.57 | 0.000 | |||

Blocks | |||||||

1 | -0.037 | 0.109 | -0.34 | 0.735 | 1.00 | ||

A | -0.464 | 0.629 | -0.232 | 0.109 | -2.13 | 0.039 | 1.00 |

B | -0.413 | 0.661 | -0.207 | 0.109 | -1.90 | 0.065 | 1.00 |

A*B | -0.456 | 0.634 | -0.228 | 0.109 | -2.09 | 0.042 | 1.00 |

Regression Equation in Uncoded Units

Ln(s1) | = | 2.351 – 0.232 A – 0.207 B – 0.228 A*B |

*Equation averaged over blocks.*

Alias Structure

Factor | Name |

A | A |

B | B |

C | C |

Aliases |

I |

Block 1 |

A |

B |

AB |

Fits and Diagnostics for Unusual Observations

Original Response

Obs | s1 | Fit | Ratio Residual |

18 | 0.989 | 5.593 | 0.177 |

30 | 2.400 | 12.490 | 0.192 |

Fits and Diagnostics for Unusual Observations

Transformed Response

Obs | Ln(s1) | Ln(Fit) | Ln(Resid) | Std Ln(Resid) | |

18 | -0.011 | 1.721 | -1.733 | -2.42 | R |

30 | 0.875 | 2.525 | -1.649 | -2.31 | R |

*R Large residual*

Using the ANOVA table and the Pareto chart, we see that factor A (frog size) and the interaction of factors A (frog size) and B (paper type) have significant effects on the variability across the repetitions of each run.

Examining the residual analysis, there are no serious violations of the assumption that the error terms are normally distributed with mean zero and constant variance. The runs were performed in random order, and the residuals-versus-order plot indicates no issues. The histogram is acceptable but shows some outliers; in this case, the outliers are “duds”, frogs whose seams were so hopelessly creased that they had no chance of jumping very far.

Binary Logistic Regression: Successes versus A, B, C, Blocks

Method

Link function | Logit |

Categorical predictor coding | (1, 0) |

Rows used | 48 |

Response Information

Variable | Value | Count | Event Name |

Successes | Event | 230 | Event |

Non-event | 250 | ||

Trials | Total | 480 |

Deviance Table

Source | DF | Adj Dev | Adj Mean | Chi-Square | P-Value |

Regression | 4 | 25.388 | 6.3470 | 25.39 | 0.000 |

A | 1 | 0.880 | 0.8795 | 0.88 | 0.348 |

B | 1 | 0.880 | 0.8795 | 0.88 | 0.348 |

C | 1 | 0.880 | 0.8795 | 0.88 | 0.348 |

Blocks | 1 | 22.878 | 22.8784 | 22.88 | 0.000 |

Error | 43 | 95.759 | 2.2270 | ||

Total | 47 | 121.147 |

Model Summary

Deviance R-Sq | Deviance R-Sq(adj) | AIC |

20.96% | 17.65% | 649.20 |

Coefficients

Term | Coef | SE Coef | VIF |

Constant | -0.532 | 0.134 | |

A | -0.0880 | 0.0939 | 1.00 |

B | 0.0880 | 0.0939 | 1.00 |

C | -0.0880 | 0.0939 | 1.00 |

Blocks | |||

2 | 0.887 | 0.188 | 1.00 |

Odds Ratios for Continuous Predictors

Odds Ratio | 95% CI | |

A | 0.9158 | (0.7619, 1.1008) |

B | 1.0920 | (0.9085, 1.3125) |

C | 0.9158 | (0.7619, 1.1008) |

Odds Ratios for Categorical Predictors

Level A | Level B | Odds Ratio | 95% CI |

Blocks | |||

2 | 1 | 2.4286 | (1.6806, 3.5094) |

*Odds ratio for level A relative to level B*

Regression Equation

P(Event) | = | exp(Y’)/(1 + exp(Y’)) |

Blocks | |||

1 | Y’ | = | -0.5316 – 0.08798 A + 0.08798 B – 0.08798 C |

2 | Y’ | = | 0.3557 – 0.08798 A + 0.08798 B – 0.08798 C |

Goodness-of-Fit Tests

Test | DF | Chi-Square | P-Value |

Deviance | 43 | 95.76 | 0.000 |

Pearson | 43 | 84.58 | 0.000 |

Hosmer-Lemeshow | 6 | 10.81 | 0.094 |

Fits and Diagnostics for Unusual Observations

Obs | Observed Probability | Fit | Resid | Std Resid | |

6 | 1.0000 | 0.5665 | 3.3712 | 3.57 | R |

13 | 0.9000 | 0.5665 | 2.3235 | 2.46 | R |

15 | 0.2000 | 0.5229 | -2.1039 | -2.23 | R |

16 | 0.2000 | 0.5229 | -2.1039 | -2.23 | R |

17 | 0.3000 | 0.6091 | -1.9770 | -2.09 | R |

18 | 1.0000 | 0.5665 | 3.3712 | 3.57 | R |

22 | 0.3000 | 0.6091 | -1.9770 | -2.09 | R |

33 | 0.1000 | 0.3909 | -2.0737 | -2.19 | R |

37 | 0.8000 | 0.3909 | 2.6467 | 2.80 | R |

*R Large residual*

The Pearson Chi-square statistic and its associated p-value indicate that we reject the null hypothesis that the model fits the data, in favor of the alternative that it does not (Lesson 2.4 of STAT 504).

H_0: the model M_0 fits

H_A: the model M_0 does not fit (or some other model M_A fits)

When we reduce the model for the success-rate response, the Chi-square statistics and their associated p-values do not improve.

I also tried a factorial regression with landing success as the response. None of the p-values for the main factors were significant. The p-value for the blocks should only be used when considering whether to include a blocking factor; by that standard, the sand did impact the performance of the launches.

Factorial Regression: Successes versus Blocks, A, B, C

Analysis of Variance

Source | DF | Adj SS | Adj MS | F-Value | P-Value |

Model | 8 | 80.917 | 10.1146 | 2.13 | 0.056 |

Blocks | 1 | 56.333 | 56.3333 | 11.88 | 0.001 |

Linear | 3 | 6.250 | 2.0833 | 0.44 | 0.726 |

A | 1 | 2.083 | 2.0833 | 0.44 | 0.511 |

B | 1 | 2.083 | 2.0833 | 0.44 | 0.511 |

C | 1 | 2.083 | 2.0833 | 0.44 | 0.511 |

2-Way Interactions | 3 | 4.250 | 1.4167 | 0.30 | 0.826 |

A*B | 1 | 0.083 | 0.0833 | 0.02 | 0.895 |

A*C | 1 | 4.083 | 4.0833 | 0.86 | 0.359 |

B*C | 1 | 0.083 | 0.0833 | 0.02 | 0.895 |

3-Way Interactions | 1 | 14.083 | 14.0833 | 2.97 | 0.093 |

A*B*C | 1 | 14.083 | 14.0833 | 2.97 | 0.093 |

Error | 39 | 185.000 | 4.7436 | ||

Lack-of-Fit | 7 | 58.333 | 8.3333 | 2.11 | 0.072 |

Pure Error | 32 | 126.667 | 3.9583 | ||

Total | 47 | 265.917 |

Model Summary

S | R-sq | R-sq(adj) | R-sq(pred) |

2.17798 | 30.43% | 16.16% | 0.00% |

Coded Coefficients

Term | Effect | Coef | SE Coef | T-Value | P-Value | VIF |

Constant | 4.792 | 0.314 | 15.24 | 0.000 | ||

Blocks | ||||||

1 | -1.083 | 0.314 | -3.45 | 0.001 | 1.00 | |

A | -0.417 | -0.208 | 0.314 | -0.66 | 0.511 | 1.00 |

B | 0.417 | 0.208 | 0.314 | 0.66 | 0.511 | 1.00 |

C | -0.417 | -0.208 | 0.314 | -0.66 | 0.511 | 1.00 |

A*B | 0.083 | 0.042 | 0.314 | 0.13 | 0.895 | 1.00 |

A*C | 0.583 | 0.292 | 0.314 | 0.93 | 0.359 | 1.00 |

B*C | 0.083 | 0.042 | 0.314 | 0.13 | 0.895 | 1.00 |

A*B*C | 1.083 | 0.542 | 0.314 | 1.72 | 0.093 | 1.00 |

Regression Equation in Uncoded Units

Successes | = | 4.792 – 0.208 A + 0.208 B – 0.208 C + 0.042 A*B + 0.292 A*C + 0.042 B*C + 0.542 A*B*C |

*Equation averaged over blocks.*

Alias Structure

Factor | Name |

A | A |

B | B |

C | C |

Aliases |

I |

Block 1 |

A |

B |

C |

AB |

AC |

BC |

ABC |

Fits and Diagnostics for Unusual Observations

Obs | Successes | Fit | Resid | Std Resid | |

37 | 8.000 | 3.083 | 4.917 | 2.50 | R |

*R Large residual*

Through the response optimization method, we would choose the following levels of each variable. Here, optimization means the longest possible jump and the greatest odds of the frog landing on its feet, with both response variables given equal importance.

Factor A (size) is optimized at the smaller size for both response variables.

Factor B (paper type) is inconclusive. To optimize jump distance alone, we would use frogs made of computer paper; to optimize the odds of landing on their feet, we would use origami paper. Since both responses are equally important, I examined the main effects plots for each response in more detail, looking for a response whose fitted-mean range is much larger than the other's. Both responses have relatively narrow fitted-mean ranges for this factor, which suggests the results are indeed inconclusive and factor B may not need to be considered in the optimization.

Factor C (paper clip) is optimized at the low level for both responses. Frogs without paper clips jump further and have higher odds of landing on their feet, according to this optimization model.

However, in the factorial regression model, to optimize jumping distance, we settled on a model that included only the terms for factors A (size) and factor B (paper type). The blocking factor was also included.

Further investigation could discern whether factor C (clip) impacts small and large frogs differently. My theory is that the paper clip's weight relative to the frog is higher for small frogs, possibly affecting the jumping distance and successful landings of the two sizes differently.

Since there is no consensus on the optimization of factor B (paper type), our final model may include only the main effect for factor A (size): little frogs jump further and have greater odds of landing on their feet, with the variation from the landing surface handled by the blocking factor.
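The trade-off can be tabulated directly from the fitted equations reported above (coded units; as a simplification, the two block intercepts of the logistic fit are averaged):

```python
import itertools
import numpy as np

# fitted mean-distance equation from the reduced model (coded units, averaged over blocks)
def distance(a, b):
    return 13.133 - 3.104 * a - 1.400 * b

# success probability from the logistic fit, averaging the two block intercepts
def p_success(a, b, c):
    etas = (-0.5316 - 0.08798 * a + 0.08798 * b - 0.08798 * c,
            0.3557 - 0.08798 * a + 0.08798 * b - 0.08798 * c)
    return float(np.mean([1 / (1 + np.exp(-e)) for e in etas]))

# evaluate the four A x B corners (no clip, C = -1): A = -1 (small) wins on both
# responses, while B pulls distance and landing odds in opposite directions
for a, b in itertools.product((-1, 1), repeat=2):
    print(f"A={a:+d} B={b:+d}  dist={distance(a, b):5.2f}  P(feet)={p_success(a, b, -1):.3f}")
```

This confirms the narrative: size dominates both responses, while paper type trades a little distance for a little landing probability.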