In the Maintainers call on the 14th of September we spoke briefly about Reporting. That led into a larger discussion about what reporting means for each adopter of Hypha, and a decision to hold a space (a meeting plus an async space) to understand ‘Reporting’ better.
The points from Maintainers call on the 14th read as follows:
Reporting functionality: hoping to have updates and wireframes next week (this is the monitoring-and-evaluation type of reporting)
Need to decide on terms for reporting on system analytics, manager KPIs, and partner monitoring and evaluation
@eriol will take on working with folks to define this
Doing some research with @rosy and her team on workflow changes
Eriol’s proposal:
We use a space in the Adopters meeting on the 22nd (and any subsequent Adopters meetings) to begin to define ‘Reporting’
We hold additional meeting time if we do not resolve clarity around reporting
Eriol creates a Miro board to collect and collate all information that can also be added to asynchronously
We stop when we feel we have a ‘good enough’ amount of info to move forward with reporting
Is the above proposal understandable, or does it require any clarification?
What are people’s quick reactions to the above proposal?
Does anyone have any critical concerns about the above proposal?
Is it good enough or safe enough to try for now?
Would accepting this proposal hinder us working together to meet our stated aims?
How can we ‘measure’ the concern?
Please note the links below to other reporting conversations (and add any additional links around reporting in Hypha in comments to this thread)
In the Maintainers call on the 28th of September 2021 we briefly discussed one part of reporting: applicants/grantees reporting to grant-giving organisations/funders. We also briefly talked about whom grant-giving organisations/funders report to, and how (e.g. government, donors etc.)
We’d like to collect responses from adopters on the following questions before the next adopter call at the end of October 2021.
Can the design and product team get responses to these, either written or recorded as a quick screen recording (https://www.loom.com/ is good!), by Thursday 14 October 2021, please?
What kinds of reports do you currently ask for from applicants? (e.g. written long form, written short, automated surveys via SMS or email etc.)
What kinds of questions do the applicants answer in these reports and to what extent do you value different kinds of questions/responses? (e.g. quantitative, qualitative and you value each equally)
What kinds of methods or tools do you either ask applicants to use to send reports or do applicants request to use to send you reports? (e.g. attachment files on emails, SMS on Signal etc.)
What kind of follow-up do you do as a funder to reports from applicants? (e.g. we offer support, sign post for support from other places etc.)
Tagging Adopters to respond to the questions when they can before the deadline. We briefly talked about whom grant-giving organisations/funders report to and how, but for now we’ll focus on applicants reporting to grant-giving organisations/funders.
We currently have templates prepared in word (for narrative reporting) and excel (for financial reporting). There are several different types of narrative template depending on the type of project and the stage of the project, with the most detail being asked at the end of the project. The grantee fills out an answer to each question and returns the form. Alongside the narrative reports, we have what we call an “outcomes log”, where grantees can add different changes/impacts that have taken place throughout the project and list evidence for each impact.
Heavily weighted towards qualitative and focused on impact, rather than activities and outputs. We want to hear about progress towards objectives, what kind of outcomes/impacts were achieved, lessons learned, types of collaboration, suggestions for us and others, next steps after the project, etc. Quantitative information can sometimes help to support this, but is not requested specifically (like number of people who signed onto a campaign, etc).
Up until now, we upload the templates into a password-protected Nextcloud folder (though we plan to start using the projects space on Hypha for this). The grantee then completes the reports and uploads them into the same folder for us to download and copy into our internal folders. We also use email and Signal sometimes for communications and follow up questions about reports, but prefer to try and keep most of that type of information out of our emails, and keep email attachments to a minimum.
Provide direct feedback and ask questions over email or a call, and provide suggestions/support for next steps. Update the case study page on our website, and attach relevant documents and links. Post blogs by the grantee on our website. Feed any results and outcomes into a central outcomes log we keep of all outcomes achieved by our grantees. This is used to measure the overall impact of our funding, measure progress towards our own goals and commitments to our donors, and to feed into reporting to our own donors (who are currently all foundations).
Hope that helps and happy to answer any follow up questions.
Thomas
Can the design and product team get responses to these
I have realized in reading @eriol’s message that I cannot confidently say who comprises this team.
My uncertainty here makes me think we need to do a better job documenting who is responsible for different aspects of Hypha’s development. Some of the info exists on the Hypha main page and the main documentation page, but we can make more explicit the division of labor (in the sense of saying “here’s who to contact if you have questions about:”).
It might also be a good idea to list the means by which we can contact each of those folks (perhaps a link to WeChat and their handle on the platform).
Questions for folks:
Do you agree that we should make this more explicit to the community? (For ease of generating consensus, please like the post if you agree)
Where should it go (main page only, main page + main docs page)?
Once we have answers to those two questions, I can create GitHub tickets in the relevant repository/repositories to begin the process.
What kinds of reports do you currently ask for from applicants? (e.g. written long form, written short, automated surveys via SMS or email etc.)
Every 6 months we do an informal check-in via Zoom, phone call, or email. In this case the report end product is my notes on the call or the text of the email update.
Every project is different, and we ask for different things at the 1-year mark. For example, a large satellite project may require a written report on each deliverable, whereas a scholarship grant may (a) be done already and (b) just need a list of recipients, where they went to school, etc.
What kinds of questions do the applicants answer in these reports and to what extent do you value different kinds of questions/responses? (e.g. quantitative, qualitative and you value each equally)
The value of each question type depends on the project, so for the purposes of building reporting in Hypha, both are equally valuable.
For example, for scholarships, the number and amount of scholarships distributed by our partners is the most important measure of success. For a project where a club is putting up a repeater antenna, describing that they successfully installed the antenna is the most important part.
What kinds of methods or tools do you either ask applicants to use to send reports or do applicants request to use to send you reports? (e.g. attachment files on emails, SMS on Signal etc.)
Attachments on email are fine, but we prefer to use a form (which right now is in Airtable). The ability to upload attachments is helpful for photos and for people who have long publications they want to upload.
What kind of follow-up do you do as a funder to reports from applicants? (e.g. we offer support, sign post for support from other places etc.)
We use reports for:
Publicizing information in our annual report, in blog posts, and at public meetings where we share about the work we’ve been doing. Please note that we ask specifically for information that can be shared publicly. This may be a subset of the report, or the whole thing, or none of it.
We use internal report info about lessons learned and challenges to talk with our board and reviewers about which projects need additional support, or which types of projects we should think about funding in the future.
We would like to use reports to advise new applicants about what makes a project successful and about common pitfalls.
@allan when you have a chance, do you think you could answer these questions from a DDP point of view?
What kinds of reports do you currently ask for from applicants? (e.g. written long form, written short, automated surveys via SMS or email etc.)
What kinds of questions do the applicants answer in these reports and to what extent do you value different kinds of questions/responses? (e.g. quantitative, qualitative and you value each equally)
What kinds of methods or tools do you either ask applicants to use to send reports or do applicants request to use to send you reports? (e.g. attachment files on emails, SMS on Signal etc.)
What kind of follow-up do you do as a funder to reports from applicants? (e.g. we offer support, sign post for support from other places etc.)
@blah You mentioned in Maintainers last week that you had a few reporting-centric issues you wanted to link for me ahead of the feasibility review meeting [Link pending] for reporting.
Hi @Adopters and @Implementers, here’s a video (it lasts about 20 minutes) covering the reporting feature idea, the concepts, and how it could work.
The video covers:
What do we mean by reporting?
Reporting on project impact and progress
Using the terms “reporting” and “project update”
Fundees Reporting To The Funders
Funders Reporting To Their Funders
Funders Reporting Internally
Hypha System Performance
What we’re focusing on first
Project update concepts
Areas of focus
Hypotheses
Meeting the funder’s goals/missions
Mapping deliverables to areas of focus
How the reporting process could work:
What is the purpose of the update?
Who is making the update?
How is the update being made?
Who can read/see the update?
What is the update frequency?
When does the update process start?
How the evaluating/visualising reporting information aspect could work
Visualising project impact and progress 1) individually and 2) on aggregate
Thoughts, opinions, constructive criticism and feedback on what’s covered are welcome. I’m experimenting with the format and the way I’m communicating this work, so let me know if it does or doesn’t work for you.
Thanks @bernard ! This is broadly in line with what DFF is doing so if we can implement something similar in Hypha that would be great. I like the video format as we can go to it in our own time, rather than having to set aside time for a call.
Apologies for not replying to this, getting swamped with the product work.
I would like to discuss this on the next docs WG call, as I think it’s a topic connected to this work but not about reporting specifically. I’ll add it to the agenda for the Documentation Working Group meeting on 30th Nov.
FYI @bernard Allan got back to me in a PM and said:
I have collated some answers and I’ll send them across to you by the end of this week. It’s been super busy the last few weeks on my end and very little to do with Hypha.
So we’ll have the DDP perspective as soon as Allan gets some breathing space from day-to-day work.
@eriol Before I can estimate the dev hours I need a more detailed discussion of what the feature entails. E.g. 1.2.1 could be as simple as a URL field for pasting in links to other documents, all the way up to a complex document-handling system that can index and create links to sections in doc and PDF files.
[Triggered by @dluong’s mention of adding category questions to reporting in the reporting Miro board]
Can someone (I’m thinking @blah or @dluong or @frjo) explain the original purpose of “category questions” in Hypha? What need were they created to address?
I understand that they are questions which applicants are required to answer, but I don’t have a full understanding of how they are used in practice.
When they are used, what is done with the answers? What can the answers be used for?
There was a need to have questions of the type “Country”, “Focus” etc. that all had multiple options and that should be the same on many/all applications.
Recreating e.g. a “What country are you from?” question on each application, with the option to pick any of the world’s 195 countries, would be cumbersome.
With “Category questions” they can be set up once and then reused on any number of applications.
Since they are the same on all applications, they can be used to collect statistics etc.
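To illustrate the idea, here’s a minimal, hypothetical sketch in plain Python (not Hypha’s actual Django data model; all class and function names below are illustrative): a category question is defined once with a fixed option set, reused across any number of applications, and its answers can then be tallied for statistics.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CategoryQuestion:
    """A reusable question with a fixed set of options (e.g. Country, Focus)."""
    name: str
    options: tuple[str, ...]

@dataclass
class Application:
    """One application; answers are keyed by the shared question name."""
    applicant: str
    answers: dict[str, str] = field(default_factory=dict)

    def answer(self, question: CategoryQuestion, choice: str) -> None:
        # Because the option set is defined once on the question,
        # every application validates against the same list.
        if choice not in question.options:
            raise ValueError(f"{choice!r} is not a valid option for {question.name!r}")
        self.answers[question.name] = choice

def tally(applications: list[Application], question: CategoryQuestion) -> Counter:
    """Aggregate answers to one shared category question across applications."""
    return Counter(
        app.answers[question.name]
        for app in applications
        if question.name in app.answers
    )

# The same question object is reused on every application:
country = CategoryQuestion("Country", ("Kenya", "Brazil", "Germany"))
apps = [Application("A"), Application("B"), Application("C")]
apps[0].answer(country, "Kenya")
apps[1].answer(country, "Kenya")
apps[2].answer(country, "Brazil")
print(tally(apps, country))  # Counter({'Kenya': 2, 'Brazil': 1})
```

Because every application shares the same question definition, the answers are directly comparable, which is what makes the statistics use case straightforward.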