With the activity goals met and recruitment complete, the next step is to prepare for the test.
First, confirm the delivery time of the beta installation package with the product and development team. In previous runs, because the activity group chat was created at registration time, the installation package could be sent directly to the group, and participating users could be guided through download and installation there.
During installation, note that iOS users must download Apple's official TestFlight app to join the test, while Android users may see a warning during installation such as "this version has not passed verification; jump to the official market to install." Guide users past these prompts and confirm that they have installed the version used in the activity.
So how do we guide users to test the newly launched features? I have taken two approaches.
Because this is a K12 live-teaching product, features that can only be exercised in a live broadcast can be tested by coordinating with internal colleagues to simulate a teacher's real class. Once the broadcast time is set, notify the beta users in advance so they can reserve time to participate, then walk them through the operations during the broadcast. Note that when users are guided through the test via live broadcast, the broadcast should explain how to perform each operation, what result to expect, and how to record any problem encountered, so that issues can be traced afterwards.
For features that do not require a live class, you can prepare a feature walkthrough document using annotated screenshots, screen recordings, GIFs, or flowcharts, and guide users through the operations with pictures, text, and video so that problems surface.
Because the beta version differs from the official release, collect user feedback through a separate, dedicated channel so that the beta activity does not contaminate the official version's feedback data.
In previous activities, I collected feedback via questionnaires: participating users filled in their account ID, the time they encountered the problem, and corresponding screenshots or a screen recording of the reproduction. Combined with the product's event-tracking (buried-point) data, this was usually enough to localize the problem.
After receiving feedback from the beta users, sort the reported problems by description and urgency. To count how often each type of problem is reported, match each user's basic information (account, device type, etc.) with the feedback itself (problem description + screenshot), and also normalize the free-text descriptions into standard category keywords so the volume of each problem type can be tallied at the end. For feedback that is unclear or that you are unsure about, confirm with the user a second time rather than guessing.
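The keyword-normalization step above can be sketched as a small script. This is a minimal illustration, not the tooling used in the original activity; the category names and keyword lists here are hypothetical and would come from the feedback actually collected.

```python
from collections import Counter

# Hypothetical category keywords -- a real activity would build this
# mapping from the problem descriptions users actually submit.
CATEGORIES = {
    "crash": ["crash", "force close", "quit unexpectedly"],
    "video": ["black screen", "frozen video", "no picture"],
    "audio": ["no sound", "echo", "muted"],
}

def categorize(description: str) -> str:
    """Map a free-text problem description to a standard category keyword."""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

# Example feedback records: account + device info matched with the description.
feedback = [
    {"account": "u001", "device": "iOS", "description": "App crashed when entering the live class"},
    {"account": "u002", "device": "Android", "description": "Black screen after joining"},
    {"account": "u003", "device": "iOS", "description": "The app force close on startup"},
]

# Tally how many reports fall into each category.
counts = Counter(categorize(item["description"]) for item in feedback)
print(counts.most_common())  # [('crash', 2), ('video', 1)]
```

Counting by normalized category rather than raw description is what lets you say "12 users hit the same black-screen problem" instead of treating each wording as a separate issue.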
Once the problems reported by users are sorted out, send the document to the product and development team responsible for the project for follow-up.
In the document, for each reported problem that affects usage, assess whether it is a bug and record the fix plan; for feature requests and suggestions, assess whether a future iteration is warranted and, if so, the iteration plan.
Analyzing the results of the beta activity helps us evaluate the effect and quality of each run, along two dimensions:
Participation: target number of recruits, actual number of recruits, number who installed the beta version, and number who gave feedback;
Feedback quality: total feedback from the activity, valid feedback, bug feedback, and feature-request feedback.
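The two dimensions above reduce to a simple funnel-and-quality calculation. The numbers below are made up purely for illustration; a real report would plug in the activity's actual figures.

```python
# Participation funnel (hypothetical example numbers).
target_recruits = 100
actual_recruits = 80
installed_beta = 60
gave_feedback = 45

# Feedback quality (hypothetical example numbers).
total_feedback = 120
valid_feedback = 90

# Conversion at each stage shows where participants drop off.
recruit_rate = actual_recruits / target_recruits   # recruiting effectiveness
install_rate = installed_beta / actual_recruits    # install conversion
feedback_rate = gave_feedback / installed_beta     # engagement of installers
valid_rate = valid_feedback / total_feedback       # signal quality of feedback

print(f"recruit rate:   {recruit_rate:.0%}")   # 80%
print(f"install rate:   {install_rate:.0%}")   # 75%
print(f"feedback rate:  {feedback_rate:.0%}")  # 75%
print(f"valid feedback: {valid_rate:.0%}")     # 75%
```

Tracking these rates across runs makes it easy to see whether a given activity recruited well but lost users at installation, or installed well but produced little usable feedback.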