A data leakage mistake often made when using GridSearchCV / RandomizedSearchCV

Using a scikit-learn Pipeline lets cross-validation fit preprocessing on the training split only, avoiding data leakage during hyper-parameter tuning.

We all know the importance of keeping separate train and test sets to avoid data leakage. Any data transformation should use the statistics of the training data only; in scikit-learn terms, we call ‘fit’ only on the training data, never on the test/validation data. For example, to standardize a feature we compute the mean and standard deviation of the training data alone and use those values to standardize the test/validation data as well.

However, I have noticed that people often apply such a transformation to the whole training set and then pass the already-transformed dataset directly to GridSearchCV or RandomizedSearchCV. These tools internally perform further train/validation cross-validation splits of that transformed data, and they have no mechanism to recompute the statistics on each training fold alone and leave the validation fold out. i.e...
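The fix described above can be sketched as follows, on a hypothetical toy dataset: the scaler is made a step of the Pipeline, so GridSearchCV refits it on each cross-validation training fold and the validation fold never influences the scaling statistics. The estimator, parameter grid, and dataset here are illustrative choices, not part of the original text.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for any tabular dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The leaky pattern is: scaler.fit_transform(X_train) first, then pass the
# transformed array to GridSearchCV. Instead, put the scaler INSIDE the
# pipeline so it is fitted per fold:
pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Hyper-parameters of pipeline steps are addressed as <step>__<param>.
param_grid = {"clf__C": [0.01, 0.1, 1.0, 10.0]}

# On every CV split, GridSearchCV calls pipe.fit on the training fold only,
# so the scaler's mean/std never see the validation fold.
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)
print(search.score(X_test, y_test))
```

Note the `clf__C` naming convention: GridSearchCV routes each grid entry to the matching pipeline step, which is what makes tuning a whole preprocessing-plus-model chain possible in one search.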