{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Science Tech Brief By HackerNoon","title":"Navigating the Maze of Multiple Hypotheses Testing—Part 2: Practical Implementation","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/94436fe7\"></iframe>","width":"100%","height":180,"duration":221,"description":"\n        This story was originally published on HackerNoon at: https://hackernoon.com/navigating-the-maze-of-multiple-hypotheses-testingpart-2-practical-implementation.\n             In this article, we will explore practical implementation with Python code and interpretation of the results.\n            Check more stories related to science at: https://hackernoon.com/c/science.\n            You can also check exclusive content about #statistics, #python, #data-analysis, #bonferroni-correction, #hypothesis-testing, #statistical-significance, #p-values, #data-interpretation, and more.\n            \n            \n            This story was written by: @vabars. Learn more about this writer by checking @vabars's about page,\n            and for more stories, please visit hackernoon.com.\n            \n                \n                \n                The Bonferroni correction adjusts p-values upward to control the increased risk of Type I errors (false positives) that comes with multiple testing. In this case, the first (`True`) and last (`True`) hypotheses are rejected.\n        \n        ","thumbnail_url":"https://img.transistorcdn.com/S66fL9skYMhlajDauLWqBH_bXds_u8JsPbvAZlh45OA/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzQxMjczLzE2ODM1/ODI0MjQtYXJ0d29y/ay5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}