Summary

For my ticketed work this sprint, I was required to write tests verifying the correctness of messages in a conversation history with regard to their ordering and grouping by sent time, as well as the correctness of the headings indicating the day on which they were sent. Because testing this required the ability to control message timestamps, it was a technical requirement to implement the test as a Feature Test using a mocked component. Early in this process it became clear to me that there were issues with the correctness of the results coming from the existing mocked component. I reached out to the Dev team on Tuesday, and it was confirmed that the complication stemmed from the implementation of the existing mock. I then dutifully set to work fixing the mock before writing the test against it. After some time at this, I realized that the work I was doing was well outside the scope of what I was initially assigned.

Questions for followup

What should be the appropriate process to handle this situation? While this does not block my ability to write a test for the component, I have no way to verify the correctness of the test without valid data to test against. As this is a feature test, if it were merged it would need to be marked as 'pending': even if it were fully correct, it would rightly fail due to the incorrect data and block the pipeline as a failing feature test.

In addition, while I have confidence in my ability to write accurate tests, I have no way to be 100% sure, and I do not wish to sign my name to any work that cannot be verified as correct and complete.

I also think it would be quite inefficient in this scenario to mark my ticket as "impossible to complete," create a new ticket for the next sprint to address fixing the underlying component, and then revisit the initial ticket in the following sprint. That represents a process delay of 5-6 weeks.
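For context, the kind of check described above can be sketched independently of the mocked component. This is a minimal, self-contained illustration only: the `Message` type and `group_messages_by_day` helper are hypothetical stand-ins, not the actual codebase, and timestamps are supplied directly rather than through the project's mock.

```python
# Hedged sketch with hypothetical names (Message, group_messages_by_day);
# the real project's mock and test framework are not represented here.
from dataclasses import dataclass
from datetime import datetime
from itertools import groupby


@dataclass
class Message:
    text: str
    sent_at: datetime  # controlled directly, so no mocked clock is needed


def group_messages_by_day(messages):
    """Sort messages by sent time, then group them under a per-day heading."""
    ordered = sorted(messages, key=lambda m: m.sent_at)
    return [
        (day.strftime("%Y-%m-%d"), list(group))
        for day, group in groupby(ordered, key=lambda m: m.sent_at.date())
    ]


messages = [
    Message("b", datetime(2023, 5, 2, 9, 0)),
    Message("a", datetime(2023, 5, 1, 17, 30)),
    Message("c", datetime(2023, 5, 2, 10, 15)),
]
grouped = group_messages_by_day(messages)

# Ordering/grouping assertions of the sort the feature test would make:
assert [heading for heading, _ in grouped] == ["2023-05-01", "2023-05-02"]
assert [m.text for m in grouped[1][1]] == ["b", "c"]
```

The point of the sketch is that with fixed, known timestamps the expected headings and ordering can be asserted exactly; the blocking problem described above is that the existing mock does not yield data whose expected output is known to be valid.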