References

[1] Abate, P. and Cosmo, R.D. 2011. Predicting upgrade failures using dependency analysis. 2011 IEEE 27th international conference on data engineering workshops (Apr. 2011).

[2] Abate, P. et al. 2009. Strong dependencies between software components. 2009 3rd international symposium on empirical software engineering and measurement (Oct. 2009).

[3] Abdalkareem, R. et al. 2017. Why do developers use trivial packages? An empirical case study on npm. Proceedings of the 2017 11th joint meeting on foundations of software engineering - ESEC/FSE 2017 (2017).

[4] Adams, B. and McIntosh, S. 2016. Modern release engineering in a nutshell: Why researchers should care. Software analysis, evolution, and reengineering (saner), 2016 ieee 23rd international conference on (2016), 78–90.

[5] Ali, M. et al. 2017. Same app, different app stores: A comparative study. Proceedings of the 4th international conference on mobile software engineering and systems (2017), 79–90.

[6] Anchiêta, R.T. and Moura, R.S. 2017. Exploring unsupervised learning towards extractive summarization of user reviews. Proceedings of the 23rd Brazilian symposium on multimedia and the web (2017), 217–220.

[7] AppleInsider 2008. Apple’s App Store launches with more than 500 apps. http://appleinsider.com/articles/08/07/10/apples_app_store_launches_with_more_than_500_apps.

[8] Aralikatte, R. et al. 2018. Fault in your stars: An analysis of Android app reviews. Proceedings of the acm india joint international conference on data science and management of data (2018), 57–66.

[9] Arisholm, E. et al. 2010. A systematic and comprehensive investigation of methods to build and evaluate fault prediction models. Journal of Systems and Software. 83, 1 (2010), 2–17.

[10] Atifi, M. et al. 2017. A comparative study of software testing techniques.

[11] Bacchelli, A. and Bird, C. 2013. Expectations, outcomes, and challenges of modern code review. Proceedings of the 2013 international conference on software engineering (2013), 712–721.

[12] Baltes, S. et al. 2018. (No) influence of continuous integration on the commit activity in github projects. arXiv preprint arXiv:1802.08441. (2018).

[13] Banerjee, A. et al. 2018. Energypatch: Repairing resource leaks to improve energy-efficiency of android apps. IEEE Transactions on Software Engineering. 44, 5 (2018), 470–490.

[14] Bao, L. et al. 2016. How Android app developers manage power consumption?: An empirical study by mining power management commits. Proceedings of the 13th international conference on mining software repositories (2016), 37–48.

[15] Baum, T. et al. 2017. The choice of code review process: A survey on the state of the practice. International conference on product-focused software process improvement (2017), 111–127.

[16] Baum, T. et al. 2016. A faceted classification scheme for change-based industrial code review processes. Software quality, reliability and security (qrs), 2016 ieee international conference on (2016), 74–85.

[17] Bavota, G. et al. 2014. How the apache community upgrades dependencies: An evolutionary study. Empirical Software Engineering. 20, 5 (Sep. 2014), 1275–1317.

[18] Baysal, O. et al. 2016. Investigating technical and non-technical factors influencing modern code review. Empirical Software Engineering. 21, 3 (2016), 932–959.

[19] Baysal, O. et al. 2013. The influence of non-technical factors on code review. Reverse engineering (wcre), 2013 20th working conference on (2013), 122–131.

[20] Beck, K. 2003. Test-driven development: By example. Addison-Wesley Professional.

[21] Beller, M. et al. 2014. Modern code reviews in open-source projects: Which problems do they fix? Proceedings of the 11th working conference on mining software repositories (2014), 202–211.

[22] Beller, M. et al. 2017. Developer testing in the IDE: Patterns, beliefs, and behavior. IEEE Transactions on Software Engineering. 1 (2017), 1–1.

[23] Beller, M. et al. 2015. How (much) do developers test? Proceedings of the 37th international conference on software engineering - volume 2 (Piscataway, NJ, USA, 2015), 559–562.

[24] Beller, M. et al. 2017. Oops, my tests broke the build: An explorative analysis of Travis CI with GitHub. Mining software repositories (msr), 2017 ieee/acm 14th international conference on (2017), 356–367.

[25] Beller, M. et al. 2017. TravisTorrent: Synthesizing Travis CI and GitHub for full-stack research on continuous integration. Proceedings of the 14th international conference on mining software repositories (2017), 447–450.

[26] Beller, M. et al. 2015. When, how, and why developers (do not) test in their IDEs. 2015 10th joint meeting of the european software engineering conference and the acm sigsoft symposium on the foundations of software engineering, esec/fse 2015 - proceedings (2015), 179–190.

[27] Bevan, J. et al. 2005. Facilitating software evolution research with kenyon. ESEC/fse’05 - proceedings of the joint 10th european software engineering conference (esec) and 13th acm sigsoft symposium on the foundations of software engineering (fse-13) (2005), 177–186.

[28] Bird, C. and Zimmermann, T. 2017. Predicting software build errors. Google Patents.

[29] Bird, C. et al. 2015. Lessons learned from building and deploying a code review analytics platform. Proceedings of the 12th working conference on mining software repositories (2015), 191–201.

[30] Bisong, E. et al. 2017. Built to last or built too fast?: Evaluating prediction models for build times. Proceedings of the 14th international conference on mining software repositories (2017), 487–490.

[31] Blincoe, K. et al. 2015. Ecosystems in GitHub and a method for ecosystem identification using reference coupling. 2015 IEEE/ACM 12th working conference on mining software repositories (May 2015).

[32] Bogart, C. et al. 2016. How to break an API: Cost negotiation and community values in three software ecosystems. Proceedings of the 2016 24th ACM SIGSOFT international symposium on foundations of software engineering - FSE 2016 (2016).

[33] Bosu, A. and Carver, J.C. 2013. Impact of peer code review on peer impression formation: A survey. Empirical software engineering and measurement, 2013 acm/ieee international symposium on (2013), 133–142.

[34] Bouwers, E. et al. 2012. Getting what you measure. Commun. ACM. 55, 7 (Jul. 2012), 54–59.

[35] Bowring, J. and Hegler, H. 2014. Obsidian: Pattern-based unit test implementations. Journal of Software Engineering and Applications. 7, 02 (2014), 94.

[36] Buse, R.P. and Zimmermann, T. 2010. Analytics for software development. Proceedings of the fse/sdp workshop on future of software engineering research (New York, NY, USA, 2010), 77–80.

[37] Castelluccio, M. et al. 2017. Is it safe to uplift this patch? An empirical study on Mozilla Firefox. Proceedings - 2017 IEEE International Conference on Software Maintenance and Evolution, ICSME 2017 (2017), 411–421.

[38] Catal, C. 2011. Software fault prediction: A literature review and current trends. Expert Systems with Applications. 38, 4 (2011), 4626–4636.

[39] Catal, C. and Diri, B. 2009. A systematic review of software fault prediction studies.

[40] Catal, C. and Diri, B. 2009. Investigating the effect of dataset size, metrics sets, and feature selection techniques on software fault prediction problem. Information Sciences. 179, 8 (2009), 1040–1058.

[41] Cesar Brandão Gomes da Silva, A. et al. 2017. Frequent releases in open source software: A systematic review. Information. 8, 3 (2017), 109.

[42] Chen, H. et al. 2017. Toward detecting collusive ranking manipulation attackers in mobile app markets. Proceedings of the 2017 acm on asia conference on computer and communications security (2017), 58–70.

[43] Ciolkowski, M. et al. 2003. Software reviews: The state of the practice. IEEE software. 6 (2003), 46–51.

[44] Claes, M. et al. 2017. Abnormal working hours: Effect of rapid releases and implications to work content. IEEE International Working Conference on Mining Software Repositories (2017), 243–247.

[45] Claes, M. et al. 2015. A historical analysis of Debian package incompatibilities. 2015 IEEE/ACM 12th working conference on mining software repositories (May 2015).

[46] Cohen, J. 2010. Modern code review. Making Software: What Really Works, and Why We Believe It. (2010), 329–336.

[47] Constantinou, E. and Mens, T. 2017. An empirical comparison of developer retention in the RubyGems and npm software ecosystems. Innovations in Systems and Software Engineering. 13, 2-3 (Aug. 2017), 101–115.

[48] Costa, D.A. da et al. 2014. An empirical study of delays in the integration of addressed issues. 2014 ieee international conference on software maintenance and evolution (2014), 281–290.

[49] Costa, D.A. da et al. 2016. The impact of switching to a rapid release cycle on the integration delay of addressed issues - an empirical study of the Mozilla Firefox project. 2016 ieee/acm 13th working conference on mining software repositories (msr) (2016), 374–385.

[50] Cox, J. et al. 2015. Measuring dependency freshness in software systems. 2015 IEEE/ACM 37th IEEE international conference on software engineering (May 2015).

[51] Cruz, L. and Abreu, R. 2017. Performance-based guidelines for energy efficient mobile applications. Mobile software engineering and systems (mobilesoft), 2017 ieee/acm 4th international conference on (2017), 46–57.

[52] Cruz, L. and Abreu, R. 2018. Using automatic refactoring to improve energy efficiency of Android apps. arXiv preprint arXiv:1803.05889. (2018).

[53] Czerwonka, J. et al. 2015. Code reviews do not find bugs: How the current code review best practice slows us down. Proceedings of the 37th international conference on software engineering-volume 2 (2015), 27–28.

[54] Decan, A. et al. 2017. An empirical comparison of dependency issues in OSS packaging ecosystems. 2017 IEEE 24th international conference on software analysis, evolution and reengineering (SANER) (Feb. 2017).

[55] Decan, A. et al. 2018. An empirical comparison of dependency network evolution in seven software packaging ecosystems. Empirical Software Engineering. (Feb. 2018).

[56] Di Nucci, D. et al. 2018. A developer centered bug prediction model. IEEE Transactions on Software Engineering. 44, 1 (2018), 5–24.

[57] Di Nucci, D. et al. 2017. Petra: A software-based tool for estimating the energy profile of Android applications. Proceedings of the 39th international conference on software engineering companion (2017), 3–6.

[58] Di Nucci, D. et al. 2017. Software-based energy profiling of Android apps: Simple, efficient and reliable? Software analysis, evolution and reengineering (saner), 2017 ieee 24th international conference on (2017), 103–114.

[59] Di Sorbo, A. et al. 2016. What would users change in my app? Summarizing app reviews for recommending software changes. Proceedings of the 2016 24th acm sigsoft international symposium on foundations of software engineering (2016), 499–510.

[60] Dietrich, J. et al. 2014. Broken promises: An empirical study into evolution problems in java programs caused by library upgrades. 2014 software evolution week - IEEE conference on software maintenance, reengineering, and reverse engineering (CSMR-WCRE) (Feb. 2014).

[61] Dittrich, Y. 2014. Software engineering beyond the project sustaining software ecosystems. Information and Software Technology. 56, 11 (Nov. 2014), 1436–1456.

[62] Dulz, W. 2013. Model-based strategies for reducing the complexity of statistically generated test suites. International conference on software quality (2013), 89–103.

[63] Dyck, A. et al. 2015. Towards definitions for release engineering and DevOps. Release engineering (releng), 2015 ieee/acm 3rd international workshop on (2015), 3–3.

[64] D’Ambros, M. et al. 2010. An extensive comparison of bug prediction approaches. Proceedings - International Conference on Software Engineering. (2010), 31–41.

[65] D’Ambros, M. et al. 2012. Evaluating defect prediction approaches: A benchmark and an extensive comparison. Empirical Software Engineering. 17, 4-5 (2012), 531–577.

[66] Eick, S.G. et al. 2001. Does code decay? Assessing the evidence from change management data. IEEE Transactions on Software Engineering. 27, 1 (Jan. 2001), 1–12.

[67] Fagan, M. 2002. Design and code inspections to reduce errors in program development. Software pioneers. Springer. 575–607.

[68] Fowler, M. and Foemmel, M. 2006. Continuous integration. ThoughtWorks. http://www.thoughtworks.com/ContinuousIntegration.pdf.

[69] Fujibayashi, D. et al. 2017. Does the release cycle of a library project influence when it is adopted by a client project? SANER 2017 - 24th IEEE International Conference on Software Analysis, Evolution, and Reengineering (2017), 569–570.

[70] Gao, C. et al. 2018. Online app review analysis for identifying emerging issues. 2018 ieee/acm 40th international conference on software engineering (icse) (2018), 48–58.

[71] Garousi, V. and Zhi, J. 2013. A survey of software testing practices in canada. Journal of Systems and Software. 86, 5 (2013), 1354–1376.

[72] Georgiou, S. et al. 2018. What are your programming language’s energy-delay implications? Proceedings of the 15th international conference on mining software repositories (2018), 303–313.

[73] Giger, E. et al. 2012. Method-level bug prediction. Proceedings of the acm-ieee international symposium on empirical software engineering and measurement (New York, NY, USA, 2012), 171–180.

[74] Giger, E. et al. 2011. Comparing fine-grained source code changes and code churn for bug prediction. Proceedings of the 8th working conference on mining software repositories (New York, NY, USA, 2011), 83–92.

[75] Gousios, G. et al. 2014. An exploratory study of the pull-based software development model. Proceedings of the 36th international conference on software engineering (2014), 345–355.

[76] Greiler, M. et al. 2013. Strategies for avoiding text fixture smells during software evolution. IEEE international working conference on mining software repositories (2013), 387–396.

[77] Gyimothy, T. et al. 2005. Empirical validation of object-oriented metrics on open source software for fault prediction. IEEE Transactions on Software Engineering. 31, 10 (Oct. 2005), 897–910.

[78] Hall, T. et al. 2012. A Systematic Literature Review on Fault Prediction Performance in Software Engineering. IEEE Transactions on Software Engineering. 38, 6 (Nov. 2012), 1276–1304.

[79] Hamasaki, K. et al. 2013. Who does what during a code review? Datasets of oss peer review repositories. Proceedings of the 10th working conference on mining software repositories (2013), 49–52.

[80] Hassan, A.E. 2009. Predicting faults using the complexity of code changes. Proceedings of the 31st international conference on software engineering (Washington, DC, USA, 2009), 78–88.

[81] Hassan, A.E. and Xie, T. 2010. Software intelligence: The future of mining software engineering data. Proceedings of the fse/sdp workshop on future of software engineering research (New York, NY, USA, 2010), 161–166.

[82] Hassan, F. and Wang, X. 2018. HireBuild: An automatic approach to history-driven repair of build scripts. Proceedings of the 40th international conference on software engineering (2018), 1078–1089.

[83] Hassan, S. et al. 2018. Studying the dialogue between users and developers of free apps in the Google Play Store. Empirical Software Engineering. 23, 3 (2018), 1275–1312.

[84] Hejderup, J. et al. 2018. Software ecosystem call graph for dependency management. Proceedings of the 40th international conference on software engineering new ideas and emerging results - ICSE-NIER '18 (2018).

[85] Hemmati, H. and Sharifi, F. 2018. Investigating nlp-based approaches for predicting manual test case failure. Proceedings - 2018 ieee 11th international conference on software testing, verification and validation, icst 2018 (2018), 309–319.

[86] Hilton, M. et al. 2016. Usage, costs, and benefits of continuous integration in open-source projects. Proceedings of the 31st ieee/acm international conference on automated software engineering (2016), 426–437.

[87] Hora, A. et al. 2016. How do developers react to API evolution? A large-scale empirical study. Software Quality Journal. 26, 1 (Oct. 2016), 161–191.

[88] Hu, H. et al. 2018. Studying the consistency of star ratings and reviews of popular free hybrid Android and iOS apps. Empirical Software Engineering. (2018), 1–26.

[89] Hurdugaci, V. and Zaidman, A. 2012. Aiding software developers to maintain developer tests. 2012 16th european conference on software maintenance and reengineering (March 2012), 11–20.

[90] Izquierdo, D. et al. 2018. Software development analytics for Xen: Why and how. IEEE Software. (2018), 1–1.

[91] Jansen, S. 2014. Measuring the health of open source software ecosystems: Beyond the scope of project health. Information and Software Technology. 56, 11 (Nov. 2014), 1508–1519.

[92] Jha, N. and Mahmoud, A. 2017. Mining user requirements from application store reviews using frame semantics. International working conference on requirements engineering: Foundation for software quality (2017), 273–287.

[93] Jiang, Y. et al. 2008. Techniques for evaluating fault prediction models. Empirical Software Engineering. 13, 5 (Oct. 2008), 561–595.

[94] Karvonen, T. et al. 2017. Systematic literature review on the impacts of agile release engineering practices. Information and Software Technology. 86, (2017), 87–100.

[95] Kaur, A. and Vig, V. 2019. On understanding the release patterns of open source java projects. Advances in Intelligent Systems and Computing. 711, (2019), 9–18.

[96] Kerzazi, N. and Robillard, P. 2013. Kanbanize the release engineering process. 2013 1st International Workshop on Release Engineering, RELENG 2013 - Proceedings (2013), 9–12.

[97] Khomh, F. et al. 2015. Understanding the impact of rapid releases on software quality. Empirical Software Engineering. 20, 2 (2015), 336–373.

[98] Khomh, F. et al. 2012. Do faster releases improve software quality?: An empirical case study of Mozilla Firefox. Proceedings of the 9th ieee working conference on mining software repositories (Piscataway, NJ, USA, 2012), 179–188.

[99] Kikas, R. et al. 2017. Structure and evolution of package dependency networks. 2017 IEEE/ACM 14th international conference on mining software repositories (MSR) (May 2017).

[100] Kim, C.H.P. et al. 2016. Static program analysis for identifying energy bugs in graphics-intensive mobile apps. Modeling, analysis and simulation of computer and telecommunication systems (mascots), 2016 ieee 24th international symposium on (2016), 115–124.

[101] Kim, S. et al. 2011. Dealing with noise in defect prediction. Proceedings of the 33rd international conference on software engineering (New York, NY, USA, 2011), 481–490.

[102] Kim, S. et al. 2007. Predicting faults from cached history. Proceedings of the 29th international conference on software engineering (Washington, DC, USA, 2007), 489–498.

[103] Kitchenham, B. 2007. Guidelines for performing systematic literature reviews in software engineering. Keele University and University of Durham.

[104] Kitchenham, B. 2004. Procedures for performing systematic reviews. Keele, UK, Keele University. 33, 2004 (2004), 1–26.

[105] Kula, R.G. et al. 2017. An exploratory study on library aging by monitoring client usage in a software ecosystem. 2017 IEEE 24th international conference on software analysis, evolution and reengineering (SANER) (Feb. 2017).

[106] Kula, R.G. et al. 2017. Do developers update their library dependencies? Empirical Software Engineering. 23, 1 (May 2017), 384–417.

[107] Laukkanen, E. et al. 2017. Problems, causes and solutions when adopting continuous delivery—A systematic literature review. Information and Software Technology. 82, (2017), 55–79.

[108] Laukkanen, E. et al. 2018. Comparison of release engineering practices in a large mature company and a startup. Empirical Software Engineering. (2018), 1–43.

[109] Lee, T. et al. 2011. Micro interaction metrics for defect prediction. Proceedings of the 19th acm sigsoft symposium and the 13th european conference on foundations of software engineering (New York, NY, USA, 2011), 311–321.

[110] Lessmann, S. et al. 2008. Benchmarking classification models for software defect prediction: A proposed framework and novel findings. IEEE Transactions on Software Engineering. 34, 4 (2008), 485–496.

[111] Leung, H.K. and Lui, K.M. 2015. Testing analytics on software variability. Software analytics (swan), 2015 ieee 1st international workshop on (2015), 17–20.

[112] Lewis, C. et al. 2013. Does bug prediction support human developers? Findings from a Google case study. 2013 35th international conference on software engineering (icse) (May 2013), 372–381.

[113] Li, D. and Halfond, W.G. 2014. An investigation into energy-saving programming practices for Android smartphone app development. Proceedings of the 3rd international workshop on green and sustainable software (2014), 46–53.

[114] Li, S. et al. 2017. Crowdsourced app review manipulation. Proceedings of the 40th international acm sigir conference on research and development in information retrieval (2017), 1137–1140.

[115] Li, Y. et al. 2017. Mining user reviews for mobile app comparisons. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 1, 3 (2017), 75.

[116] Liu, Y. et al. 2017. NavyDroid: Detecting energy inefficiency problems for smartphone applications. Proceedings of the 9th asia-pacific symposium on internetware (2017), 8.

[117] Lungu, M. 2009. Reverse engineering software ecosystems. University of Lugano.

[118] Malhotra, R. 2015. A systematic review of machine learning techniques for software fault prediction. Applied Soft Computing. 27, C (Feb. 2015), 504–518.

[119] Malloy, B.A. and Power, J.F. 2018. An empirical analysis of the transition from Python 2 to Python 3. Empirical Software Engineering. (Jul. 2018).

[120] Malloy, B.A. and Power, J.F. 2017. Quantifying the transition from Python 2 to 3: An empirical study of Python applications. 2017 ACM/IEEE international symposium on empirical software engineering and measurement (ESEM) (Nov. 2017).

[121] Manikas, K. 2016. Revisiting software ecosystems research: A longitudinal literature study. Journal of Systems and Software. 117, (Jul. 2016), 84–103.

[122] Marsavina, C. et al. 2014. Studying fine-grained co-evolution patterns of production and test code. 2014 ieee 14th international working conference on source code analysis and manipulation (Sept 2014), 195–204.

[123] Martin, W. et al. 2017. A survey of app store analysis for software engineering. IEEE transactions on software engineering. 43, 9 (2017), 817–847.

[124] Matsumoto, S. et al. 2010. An analysis of developer metrics for fault prediction. Proceedings of the 6th international conference on predictive models in software engineering (New York, NY, USA, 2010), 18:1–18:9.

[125] Mäntylä, M.V. et al. 2015. On rapid releases and software testing: A case study and a semi-systematic literature review. Empirical Software Engineering. 20, 5 (2015), 1384–1425.

[126] McDonnell, T. et al. 2013. An empirical study of API stability and adoption in the Android ecosystem. 2013 IEEE international conference on software maintenance (Sep. 2013).

[127] McIlroy, S. 2014. Empirical studies of the distribution and feedback mechanisms of mobile app stores.

[128] McIlroy, S. et al. 2017. Is it worth responding to reviews? Studying the top free apps in Google Play. IEEE Software. 34, 3 (2017), 64–71.

[129] McIntosh, A. et al. 2018. What can Android mobile app developers do about the energy consumption of machine learning? Empirical Software Engineering. (2018), 1–40.

[130] McIntosh, S. et al. 2016. An empirical study of the impact of modern code review practices on software quality. Empirical Software Engineering. 21, 5 (2016), 2146–2189.

[131] McIntosh, S. et al. 2014. The impact of code review coverage and code review participation on software quality: A case study of the Qt, VTK, and ITK projects. Proceedings of the 11th working conference on mining software repositories (2014), 192–201.

[132] Mens, T. et al. 2013. Studying evolving software ecosystems based on ecological models. Evolving software systems. Springer Berlin Heidelberg. 297–326.

[133] Menzies, T. and Zimmermann, T. 2013. Software analytics: So what? IEEE Software. 30, 4 (July 2013), 31–37.

[134] Messerschmitt, D.G. and Szyperski, C. 2003. Software ecosystem: Understanding an indispensable technology and industry. The MIT Press.

[135] Mirzaaghaei, M. et al. 2012. Supporting test suite evolution through test case adaptation. 2012 ieee fifth international conference on software testing, verification and validation (April 2012), 231–240.

[136] Moiz, S.A. 2017. Uncertainty in software testing. Trends in software testing. Springer. 67–87.

[137] Moser, R. et al. 2008. A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction. Proceedings of the 30th international conference on software engineering (New York, NY, USA, 2008), 181–190.

[138] Moura, I. et al. 2015. Mining energy-aware commits. Proceedings of the 12th working conference on mining software repositories (2015), 56–67.

[139] Mujahid, S. et al. 2017. Examining user complaints of wearable apps: A case study on Android Wear. Mobile software engineering and systems (mobilesoft), 2017 ieee/acm 4th international conference on (2017), 96–99.

[140] Ni, A. and Li, M. 2018. ACONA: Active online model adaptation for predicting continuous integration build failures. Proceedings of the 40th international conference on software engineering: Companion proceedings (2018), 366–367.

[141] Noor, T.B. and Hemmati, H. 2015. Test case analytics: Mining test case traces to improve risk-driven testing. Software analytics (swan), 2015 ieee 1st international workshop on (2015), 13–16.

[142] Oliveira, W. et al. 2017. A study on the energy consumption of Android app development approaches. Mining software repositories (msr), 2017 ieee/acm 14th international conference on (2017), 42–52.

[143] Palomba, F. et al. 2018. Crowdsourcing user reviews to support the evolution of mobile apps. Journal of Systems and Software. 137, (2018), 143–162.

[144] Palomba, F. et al. 2017. Recommending and localizing change requests for mobile apps based on user reviews. Proceedings of the 39th international conference on software engineering (2017), 106–117.

[145] Pang, C. et al. 2016. What do programmers know about software energy consumption? IEEE Software. 33, 3 (2016), 83–89.

[146] Panichella, S. et al. 2016. Ardoc: App reviews development oriented classifier. Proceedings of the 2016 24th acm sigsoft international symposium on foundations of software engineering (2016), 1023–1027.

[147] Pereira, R. et al. 2018. JStanley: Placing a green thumb on Java collections. Proceedings of the 33rd acm/ieee international conference on automated software engineering (2018), 856–859.

[148] Perenson, M. 2008. Google launches Android Market. https://www.pcworld.com/article/152613/google_android_ships.html.

[149] Pinto, G. et al. 2018. Work practices and challenges in continuous integration: A survey with Travis CI users. (2018).

[150] Pinto, G. et al. 2014. Mining questions about software energy consumption. Proceedings of the 11th working conference on mining software repositories (2014), 22–31.

[151] Pinto, L.S. et al. 2013. TestEvol: A tool for analyzing test-suite evolution. Proceedings - international conference on software engineering (2013), 1303–1306.

[152] Pinto, L.S. et al. 2012. Understanding myths and realities of test-suite evolution. Proceedings of the acm sigsoft 20th international symposium on the foundations of software engineering (2012), 33.

[153] Plewnia, C. et al. 2014. On the influence of release engineering on software reputation. 2nd international workshop on release engineering (Mountain View, CA, USA, 2014).

[154] Poo-Caamaño, G. 2016. Release management in free and open source software ecosystems.

[155] Radjenović, D. et al. 2013. Software fault prediction metrics. Information and Software Technology. 55, 8 (Aug. 2013), 1397–1418.

[156] Raemaekers, S. et al. 2017. Semantic versioning and impact of breaking changes in the Maven repository. Journal of Systems and Software. 129, (Jul. 2017), 140–158.

[157] Rahman, F. and Devanbu, P. 2013. How, and why, process metrics are better. 2013 35th international conference on software engineering (icse) (May 2013), 432–441.

[158] Rahman, F. et al. 2011. BugCache for inspections: Hit or miss? Proceedings of the 19th acm sigsoft symposium and the 13th european conference on foundations of software engineering (New York, NY, USA, 2011), 322–331.

[159] Rajlich, V. 2014. Software evolution and maintenance. Proceedings of the on future of software engineering - FOSE 2014 (2014).

[160] Rausch, T. et al. 2017. An empirical analysis of build failures in the continuous integration workflows of java-based open-source software. Proceedings of the 14th international conference on mining software repositories (2017), 345–355.

[161] Robbes, R. et al. 2012. How do developers react to API deprecation? Proceedings of the ACM SIGSOFT 20th international symposium on the foundations of software engineering - FSE '12 (2012).

[162] Robinson, B. et al. 2011. Scaling up automated test generation: Automatically generating maintainable regression unit tests for programs. 2011 26th ieee/acm international conference on automated software engineering (ase 2011) (Nov. 2011), 23–32.

[163] Rodríguez, P. et al. 2017. Continuous deployment of software intensive products and services: A systematic mapping study. Journal of Systems and Software. 123, (2017), 263–291.

[164] Romano, S. et al. 2017. Findings from a multi-method study on test-driven development. Information and Software Technology. 89, (2017), 64–77.

[165] Saborido, R. et al. 2018. An app performance optimization advisor for mobile device app marketplaces. Sustainable Computing: Informatics and Systems. (2018).

[166] Santolucito, M. et al. 2018. Statically verifying continuous integration configurations. arXiv preprint arXiv:1805.04473. (2018).

[167] Schneidewind, N.F. 2007. Risk-driven software testing and reliability. International Journal of Reliability, Quality and Safety Engineering. 14, 2 (2007), 99–132.

[168] Scoccia, G.L. et al. 2018. An investigation into Android run-time permissions from the end users’ perspective. (2018).

[169] Shamshiri, S. et al. 2018. How do automatically generated unit tests influence software maintenance? Software testing, verification and validation (icst), 2018 ieee 11th international conference on (2018), 250–261.

[170] Shepperd, M. et al. 2014. Researcher bias: The use of machine learning in software defect prediction. IEEE Transactions on Software Engineering. 40, 6 (June 2014), 603–616.

[171] Shimagaki, J. et al. 2016. A study of the quality-impacting practices of modern code review at Sony Mobile. Software engineering companion (icse-c), ieee/acm international conference on (2016), 212–221.

[172] Souza, R. et al. 2015. Rapid releases and patch backouts: A software analytics approach. IEEE Software. 32, 2 (2015), 89–96.

[173] Stallman, R. 2002. Free software, free society: Selected essays of Richard M. Stallman. Lulu.com.

[174] State of the union: Npm: 2016. https://www.linux.com/news/event/Nodejs/2016/state-union-npm. Accessed: 2018-10-11.

[175] Statista 2018. Number of apps available in leading app stores as of 1st quarter 2018. https://www.statista.com/statistics/276623/number-of-apps-available-in-leading-app-stores.

[176] Stolberg, S. 2009. Enabling agile testing through continuous integration. Agile conference, 2009. agile’09. (2009), 369–374.

[177] Teixeira, J. 2017. Release early, release often and release on time: An empirical case study of release management. Open source systems: Towards robust practices (Cham, 2017), 167–181.

[178] Teixeira, J. et al. 2015. Lessons learned from applying social network analysis on an industrial free/libre/open source software ecosystem. Journal of Internet Services and Applications. 6, 1 (Jul. 2015).

[179] The npm blog: Kik, left-pad and npm: 2016. https://blog.npmjs.org/post/141577284765/kik-left-pad-and-npm. Accessed: 2018-10-15.

[180] The redmonk programming language rankings: January 2018: 2018. https://redmonk.com/sogrady/2018/03/07/language-rankings-1-18/. Accessed: 2018-10-11.

[181] Thongtanunam, P. et al. 2017. Review participation in modern code review. Empirical Software Engineering. 22, 2 (2017), 768–817.

[182] Thongtanunam, P. et al. 2016. Revisiting code ownership and its relationship with software quality in the scope of modern code review. Proceedings of the 38th international conference on software engineering (2016), 1039–1050.

[183] Thongtanunam, P. et al. 2015. Who should review my code? A file location-based code-reviewer recommendation approach for modern code review. Software analysis, evolution and reengineering (saner), 2015 ieee 22nd international conference on (2015), 141–150.

[184] Thongtanunam, P. et al. 2014. Reda: A web-based visualization tool for analyzing modern code review dataset. Software maintenance and evolution (icsme), 2014 ieee international conference on (2014), 605–608.

[185] Trockman, A. 2018. Adding sparkle to social coding. Proceedings of the 40th international conference on software engineering companion proceedings - ICSE ’18 (2018).

[186] Vasilescu, B. et al. 2014. Continuous integration in a social-coding world: Empirical evidence from github. Software maintenance and evolution (icsme), 2014 ieee international conference on (2014), 401–405.

[187] Vassallo, C. et al. 2018. Un-break my build: Assisting developers with build repair hints. (2018).

[188] Vassallo, C. et al. 2017. A tale of ci build failures: An open source and a financial organization perspective. Software maintenance and evolution (icsme), 2017 ieee international conference on (2017), 183–193.

[189] Vernotte, A. et al. 2015. Risk-driven vulnerability testing: Results from eHealth experiments using patterns and model-based approach.

[190] Wang, H. et al. 2018. Why are android apps removed from google play?: A large-scale empirical study. Proceedings of the 15th international conference on mining software repositories (2018), 231–242.

[191] Wang, S. et al. 2016. Automatically learning semantic features for defect prediction. 2016 ieee/acm 38th international conference on software engineering (icse) (May 2016), 297–308.

[192] Wei, L. et al. 2017. OASIS: Prioritizing static analysis warnings for android apps based on app user reviews. Proceedings of the 2017 11th joint meeting on foundations of software engineering (2017), 672–682.

[193] Widder, D.G. et al. 2018. I’m leaving you, Travis: A continuous integration breakup story. (2018).

[194] Xie, Z. et al. 2016. You can promote, but you can’t hide: Large-scale abused app detection in mobile app stores. Proceedings of the 32nd annual conference on computer security applications (2016), 374–385.

[195] Yang, X. et al. 2016. Mining the modern code review repositories: A dataset of people, process and product. Proceedings of the 13th international conference on mining software repositories (2016), 460–463.

[196] Zaidman, A. et al. 2011. Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining. Empirical Software Engineering. 16, 3 (2011), 325–364.

[197] Zampetti, F. et al. 2017. How open source projects use static code analysis tools in continuous integration pipelines. Mining software repositories (msr), 2017 ieee/acm 14th international conference on (2017), 334–344.

[198] Zanjani, M.B. et al. 2016. Automatically recommending peer reviewers in modern code review. IEEE Transactions on Software Engineering. 42, 6 (2016), 530–543.

[199] Zhang, D. et al. 2011. Software analytics as a learning case in practice: Approaches and experiences. Proceedings of the international workshop on machine learning technologies in software engineering (New York, NY, USA, 2011), 55–58.

[200] Zhao, Y. et al. 2017. The impact of continuous integration on other software development practices: A large-scale empirical study. Proceedings of the 32nd ieee/acm international conference on automated software engineering (2017), 60–71.

[201] Zimmermann, T. et al. 2009. Cross-project defect prediction: A large scale experiment on data vs. domain vs. process. Proceedings of the the 7th joint meeting of the european software engineering conference and the acm sigsoft symposium on the foundations of software engineering (New York, NY, USA, 2009), 91–100.