I recall, in the mid-1970s, a research student of mine who, on analysing her data using statistical significance testing (SST), found that the p value for what she regarded as her most important hypothesis was .07, which was not significant at the .05 level. The student asked whether it would be legitimate to change the 2-tailed test she had used to a 1-tailed test, and on receiving a negative answer from me, went away disappointed. A couple of days later she returned, saying that she had decided to remove some of the "outliers" from the data set, and that when these were removed she obtained a p value of .04. In her thesis she honestly reported the sequence of events, but still claimed that she had obtained a "statistically significant" result. The external examiners for her thesis accepted this as a legitimate tactic.
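The two manoeuvres in this anecdote are easy to demonstrate. The sketch below, using invented scores (not the student's actual data), runs a sign-flipping permutation test of the null hypothesis that the mean is zero: the one-tailed p is necessarily no larger than the two-tailed p (roughly half of it when the effect lies in the predicted direction), and discarding the extreme negative values as "outliers" shrinks the p value further.

```python
import random
import statistics

def sign_flip_pvalues(data, n_perm=10_000, seed=0):
    """One-sample location test against 0 by randomly flipping signs.
    Returns (two_tailed_p, one_tailed_p) for the alternative mean > 0."""
    rng = random.Random(seed)
    obs = statistics.mean(data)
    one_tail = 0   # permuted means at least as large as the observed mean
    two_tail = 0   # permuted means at least as extreme in absolute value
    for _ in range(n_perm):
        m = statistics.mean(x * rng.choice((-1, 1)) for x in data)
        if m >= obs:
            one_tail += 1
        if abs(m) >= abs(obs):
            two_tail += 1
    return two_tail / n_perm, one_tail / n_perm

# Invented illustrative scores: mostly positive, with two extreme negatives.
scores = [1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.3, -2.8, 1.0, 0.6, -2.4, 1.4]

p2, p1 = sign_flip_pvalues(scores)
print(f"two-tailed p = {p2:.3f}, one-tailed p = {p1:.3f}")

# Dropping the extreme negatives ("outliers") raises the mean and shrinks
# the spread, so the p value falls -- the manoeuvre described above.
trimmed = [x for x in scores if x > -2.0]
p2_trim, _ = sign_flip_pvalues(trimmed)
print(f"two-tailed p after removing 'outliers' = {p2_trim:.3f}")
```

The point, of course, is not that these steps are hard to carry out, but that choosing between them after seeing the data invalidates the nominal .05 error rate.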
The CAUSE Research Group is supported in part by a member initiative grant from the American Statistical Association’s Section on Statistics and Data Science Education.