

This is a joint post with Maren Deepwell (cross-posted here). If you have missed our earlier posts we encourage you to revisit the beginning of the story of how we, as senior staff, are leading our organisation to adopt virtual operations.


April


This month we reflect on our first three months operating as a virtual team and on delivering our first few big milestones, and we look back at the OER18 conference.


Maren: It’s been three months now since we transitioned to operating as a virtual team. Looking at the bigger picture, I feel we are on the right track [cue celebratory sounds here]. A lot of what we had planned and prepared for is working and, with some of this year’s key deliverables successfully achieved, we have evidence that we are achieving what we need to. Over the past three months we changed more than the way we operate as an organisation: at the same time, we started to employ staff directly. For me, that part of the transition had to take priority over everything else. Payroll, pensions, tax and HR had to be in a steady state, putting staff welfare, support and recruitment first. Those were the things I really worried about. So it’s a relief to have managed some of the biggest risks successfully and to see more and more of how we operate as a new employer settle into a steady state. When I reviewed our progress recently I realised that whilst I was focused on the transition, we have also made a lot of strategic progress – including delivering one of the largest events in our calendar, the OER18 Conference. What’s your perspective now that the event is successfully behind us?


Martin: For me, I was interested in how the run-up to the OER18 conference would work out. This was our first big event where the entire team was distributed. With multi-day events I think it’s hard to appreciate just how much goes into them if you haven’t organised one yourself. Even in a digital age various things need to go to print, you’ve got material and equipment that need to be delivered, and there are practical things like getting conference badges prepared. As a distributed organisation we now have distributed resources, which need some extra logistical planning to bring together at the venue. Overall this aspect went well, but one of our couriers let us down and failed to pick up a next-day delivery. As it happens the pickup was for some extra banners which, in the end, we could get away with not taking. Fortunately the bulk of the material was handled by our regular courier, with whom we have a long-standing relationship. The challenge now for us as a distributed organisation is that we need to develop relationships with additional service providers, either because we are doing things differently or because we are not all in the same place. This takes time and effort, which I think should be factored in if you are thinking about moving your organisation to a distributed or virtual structure. Has OER18 highlighted anything for you that we should take a new approach on?


Maren: Yes, I think it has and it feels timely. The work on the conference felt like a useful way for us to pull together as a team and work together virtually. It helped highlight which parts of our new virtual set-up are working well and it also made me realise how much has changed in a short space of time. Because the change has been strategic and welcome, it’s felt inspiring and positive overall, but we mustn’t forget that change takes time. Our expectations of what we want to achieve are high and sometimes I have to remind myself that it takes a lot of effort to put new ways of working in place. Doing something once isn’t enough. In my mind the kinds of considerations you are talking about, like building new relationships with suppliers and adapting what we do and how we do it, are helpful to me now – and to us as a team – but they wouldn’t have been a few months ago because there was no capacity to handle any more novelty. Now, when we evaluate the OER conference, I feel we have the capacity to put what we’ve learnt to good use in the run-up to the next event. Another aspect of running an event as a distributed team I am thinking about is meeting up beforehand. For example, we had an evening to get ready this time, but not many hours at the venue. I feel this is particularly important for members of the team who are new and haven’t been part of delivering an event with us. As we are recruiting at the moment, I am considering that, with new members of staff joining soon, I might value more time together ahead of a bigger conference. What are your thoughts on this?


Martin: Interesting point about time considerations. Knowing our Annual Conference is our next big event, where we have at least four times more delegates, it’s going to be important to factor in some of the practicalities of badge stuffing and gathering conference material. Something that I only considered after OER18 was that we could do more to distribute our printing, both in terms of when and where it is done. For example, as part of our move to a distributed organisation we limited the purchase of printers to just one for our Finance Officer. I already use Google Cloud Print at home, which essentially lets you turn any printer into a network printer. Adding our printer to Google Cloud Print would allow us to share it with our team. There is still the logistical issue of getting the materials we print to events, but at least they would be in one location where we also have access to a reliable courier. The occasional printing we do is, however, only a small part of event delivery, and in the bigger picture it would be useful to revisit our conference plan to see what we can prepare earlier to remove some of the end loading, like batching badge production. Whilst Cloud Print is a small example, I think it reflects what you were saying about a wider change in the organisation. It feels like we have more agility in how we approach and solve problems.


Maren: I agree with that. The first few months we had our preparations to build on and then the event to focus on; now we are moving on to the next phase: doing things differently, expanding (admin) support for virtual operations and updating our plans. The conference was a good catalyst to highlight the kind of questions you raise, and also our skills and competencies and the gaps in them. One of the problems in Learning Technology I come across again and again is how to build a successful organisational culture when tech, expectations, milestones etc. keep shifting. I relate to that in a different way now because, as you point out, we have a lot more scope to be flexible, to solve problems creatively rather than having to work around them. We’ve committed to being more agile and I’m discovering all over again what that means in practice for me as an individual, for us as a team and as an organisation.


Martin: Noted computer pioneer Alan Kay uses a quote from ice-hockey player Wayne Gretzky: “[a] good hockey player goes to where the puck is, [a] great hockey player goes to where the puck is going to be”. If a system is in a steady state everything is predictable; removing this means that, as an organisation, we have more control over deciding where we want to be. By creating an organisational culture where there is scope for not following the puck, we become more comfortable with not being in a steady state and, as a result, more confident in finding solutions to problems as they emerge. Something I think is required to make this successful is an existing confidence within the team … success breeds success.

To begin with, a disclaimer. This post has information related to the new EU GDPR regulation, which comes into effect on 25 May 2018, and which might be of interest to Google Apps Script and Add-on developers. I’m not a Google employee, lawyer, or a data protection expert; I’m only sharing my interpretation of information I’ve gathered for your consideration, and it is not legal advice. As this is a complex area the post is in two parts. This part looks at key definitions to help you find out if your G Suite Add-on or Google Apps Script project needs to consider personal data protection. The second part identifies 12 steps you can take if your add-on processes personal data.


This post was also written with Steve Webster, G Suite Senior Solutions Architect and Developer at SW gApps (also not a Google employee, lawyer, or a data protection expert).



Definitions


The General Data Protection Regulation (GDPR) (EU) 2016/679 is a regulation in EU law on data protection and privacy for all individuals within the European Union. It also addresses the export of personal data outside the EU. The GDPR aims primarily to give control to citizens and residents over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU. – Wikipedia


GDPR compliance isn’t just required of EU-based organisations. Any ‘enterprise’ processing ‘personal data’ from EU citizens needs to be GDPR compliant or can face “penalties of up to 4% of worldwide turnover or €20 million, whichever is higher”. This means that if EU citizens use your add-on or Apps Script project, you might need to comply with the GDPR.


Personal data


Before going further let’s look at the definition of ‘personal data’ which is covered in Article 4(1):


‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;


So the first question you might want to ask yourself is: does your add-on use any personal data? It is important to remember here that the definition of personal data is broad and can include data that has been hashed:


Personal data that has been pseudonymised – eg key-coded – can fall within the scope of the GDPR depending on how difficult it is to attribute the pseudonym to a particular individual. – UK ICO Key Definitions


For example, within Google Apps Script you can use Session.getTemporaryActiveUserKey(), which returns a temporary key that is unique to the active user but does not reveal the user’s identity. Using this by itself would probably not fall within the GDPR as it is pseudonymised, but if it is combined with other data, like location, you could argue that it indirectly identifies a person, and if this person were an EU citizen you would need to be GDPR compliant.


Data controller/Data processor


Within the context of add-ons, depending on how personal data is being used, you might also be a ‘data processor’ and/or ‘data controller’. The UK ICO defines controllers and processors as:


A controller determines the purposes and means of processing personal data.


A processor is responsible for processing personal data on behalf of a controller.


The Irish Data Protection Commissioner provides further guidance on how to identify if you are a data controller:


In essence, you are a data controller if you can answer YES to the following question:


Do you keep or process any information about living people? – IE DPC



For example, if your add-on or Apps Script project in some way lets you collect and store user email addresses you would be a data controller.


Let’s consider another example where you as a developer created a Google Sheets add-on which allows email addresses entered by the user to be used to send emails from their Gmail account. In this scenario the add-on only uses Google services available in Google Apps Script to programmatically read data from the user’s Google Sheet and send emails from the user’s Gmail account. At no point in this process would the developer have direct access to the user’s Google Sheet or Gmail account and this functionality would be entirely executed within Google services and servers. Is the developer the ‘data processor’?


I would argue that if at no point the developer accesses personal data entered by the user, they are not a ‘data processor’. The distinction is important because:


If you are a processor, the GDPR places specific legal obligations on you; for example, you are required to maintain records of personal data and processing activities. You will have legal liability if you are responsible for a breach – (UK ICO – Key definitions)


If your add-on interacts with the user’s data in a way that makes it accessible to you, e.g. temporarily storing personal data like an email address in a Firebase Realtime Database, I would argue this does make you a ‘data processor’ and, for data from EU citizens, places GDPR requirements on you. But as mentioned, I’m not a lawyer and you might want to seek further advice on that.


Enterprise and economic activity


You might argue that you provide your add-on for free or you are not incorporated as a legal entity so regardless of whether you are a data controller or processor you are exempt from the GDPR. Article 4(18) provides a definition of ‘enterprise’ which is:


a natural or legal person engaged in an economic activity, irrespective of its legal form, including partnerships or associations regularly engaged in an economic activity


Your next question might be what is defined as “economic activity”. Robert Madge writes on MyData:


economic activity is ‘offering goods or services’ (even if no payment occurs). The case law shows a broad interpretation of ‘offering goods or services’ to cover sales, supply and even purchasing.


So even if your add-on is free, you are offering goods or services and therefore, with EU citizens, you need to be GDPR compliant. An exception to this is if processing is carried out by individuals purely for personal/household activities, as covered in Recital 18; however, even if your add-on is purely for personal or household activity, the GDPR still applies “to controllers or processors which provide the means for processing personal data for such personal or household activities”.


Coverage from Google Privacy Policy


I had a look at the updated Google Privacy Policy, which covers services like Google Drive, to see if this provided any cover for third-party developers. For example, Google uses Google Analytics in its services covered by its privacy policy, and I wondered whether, if you used Google Analytics in your add-on, you would be covered by the same policy. Google’s privacy policy however states that:


This Privacy Policy doesn’t apply to … services offered by other companies or individuals, including products or sites that may include Google services


Next steps…


If you’ve concluded from the information provided that your add-on is processing personal data from EU citizens, you can find 12 steps to take now in the second part.


As noted at the start I’m not a lawyer or a data protection expert so if you have any corrections or additional information please share in the comments or get in touch.


 

To begin with, a disclaimer. This post contains information related to the new EU GDPR regulation, which comes into effect on 25 May 2018, and which might be of interest to Google Apps Script and Add-on developers. I’m not a Google employee, lawyer, or a data protection expert; I’m only sharing my interpretation of information I’ve gathered for your consideration, and it is not legal advice. As this is a complex area the post is in two parts, and this is the second in the series. In part one I looked at key definitions to help you identify if your G Suite Add-on or Google Apps Script project needs to consider personal data protection. If from that post you concluded your add-on or Apps Script project needs to add personal data protection, this post identifies 12 steps you can take now.


This post was also written with Steve Webster, G Suite Senior Solutions Architect and Developer at SW gApps (also not a Google employee, lawyer, or a data protection expert).



Limiting access


In the previous post I looked at how the GDPR provides “data protection and privacy for all individuals within the European Union”, including data from EU citizens even if it is used outside the EU. One option is to avoid the GDPR by preventing EU citizens from using your add-on. You could do this by selecting the regions when you publish it. One issue with this is that not all EU countries are individually listed (Croatia, Republic of Cyprus, Latvia, Luxembourg, Malta and Slovenia are not listed). I’m not sure whether, for example, selecting just ‘United States’ would prevent all EU citizens from accessing your add-on. Another consideration, if you are updating publication settings for an existing add-on, is whether this prevents existing users from the EU from continuing to use your add-on.



Embracing GDPR


Another option if you use personal data in your add-on is to use the GDPR as an opportunity to improve your data handling and transparency. A number of services based outside the EU have incorporated aspects of the GDPR in their privacy policies and working practices.


A good starting point is the UK ICO Preparing for the GDPR – 12 steps to take now (.pdf). The document contains more details on each of these steps and an annotated extract is contained below:


Preparing your add-on for GDPR – 12 steps


Awareness


You should make sure that decision makers and key people in your organisation are aware that the law is changing to the GDPR. They need to appreciate the impact this is likely to have.


Hopefully this post is proving a useful starting point.


Information you hold


You should document what personal data you hold, where it came from and who you share it with. You may need to organise an information audit.


For the majority of add-ons I’d imagine the personal data is limited, so this shouldn’t take too long. For EU-based developers this is something you should consider doing for your entire company.
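A minimal information audit entry for a hypothetical add-on might record no more than the following (the field names and values here are illustrative assumptions, not taken from the ICO guidance):

```javascript
// One entry per type of personal data the add-on holds: what it is,
// where it came from, where it lives, and who it is shared with.
const auditEntry = {
  data: "user email address",
  source: "entered by the user in the add-on sidebar",
  storedIn: "Firebase Realtime Database (temporary)",
  sharedWith: [], // no third parties in this example
  lawfulBasis: "consent",
};
```

Keeping a small list of such entries per project is usually enough to answer “what do we hold, where did it come from, and who sees it?”.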


Lawful basis for processing personal data


You should identify the lawful basis for your processing activity in the GDPR, document it and update your privacy notice to explain it.


The GDPR defines six lawful bases for processing personal data of which at least one has to be used when processing personal data from EU citizens:



  • Consent: the individual has given clear consent for you to process their personal data for a specific purpose.

  • Contract: the processing is necessary for a contract you have with the individual, or because they have asked you to take specific steps before entering into a contract.

  • Legal obligation: the processing is necessary for you to comply with the law (not including contractual obligations).

  • Vital interests: the processing is necessary to protect someone’s life.

  • Public task: the processing is necessary for you to perform a task in the public interest or for your official functions, and the task or function has a clear basis in law.

  • Legitimate interests: the processing is necessary for your legitimate interests or the legitimate interests of a third party unless there is a good reason to protect the individual’s personal data which overrides those legitimate interests.


In an add-on you may discover you need to handle a lawful basis for each of the different types of personal data you use. For example, if you include Google Analytics or another tracking service to monitor your add-on usage you will probably require consent from the user, but if you have premium options in your add-on you may use a contract as the lawful basis.


Within the context of add-ons and Apps Script projects, for lawful bases other than consent and contract you might want to spend time looking at ‘legitimate interests’. The UK ICO guidance on ‘legitimate interests’ states:


Legitimate interests is most likely to be an appropriate basis where you use data in ways that people would reasonably expect and that have a minimal privacy impact. Where there is an impact on individuals, it may still apply if you can show there is an even more compelling benefit to the processing and the impact is justified.


‘Legitimate interests’ can be conveyed in your user privacy notice and might be well suited to add-ons as you could argue that by installing and using the add-on there is reasonable expectation and compelling benefits. Using ‘legitimate interests’ comes with extra responsibilities including conducting and recording a legitimate interests assessment (LIA).


Consent


You should review how you seek, record and manage consent and whether you need to make any changes. Refresh existing consents now if they don’t meet the GDPR standard.


If you are using consent as a lawful basis for processing personal data, you need to keep a record on an individual basis. The UK ICO note that “consent requires a positive opt-in. Don’t use pre-ticked boxes or any other method of default consent”.
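A hedged sketch of what recording consent on an individual basis could look like, assuming an in-memory store (a real add-on might persist this with PropertiesService or an external database; all names here are illustrative):

```javascript
// In-memory consent log keyed by user; a real add-on would persist this.
const consentLog = new Map();

function recordConsent(userKey, purpose, optedIn) {
  // The GDPR requires a positive opt-in: reject anything that is not an
  // explicit true (e.g. a pre-ticked default or an undefined value).
  if (optedIn !== true) {
    throw new Error("Consent requires a positive opt-in");
  }
  consentLog.set(userKey, {
    purpose: purpose,
    grantedAt: new Date().toISOString(), // record when consent was given
  });
}

function hasConsent(userKey, purpose) {
  const entry = consentLog.get(userKey);
  return Boolean(entry && entry.purpose === purpose);
}
```

Storing the timestamp alongside the purpose means you can later demonstrate when and for what each individual consented.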


Communicating privacy information


You should review your current privacy notices and put a plan in place for making any necessary changes in time for GDPR implementation.


To be GDPR compliant there are a number of things you need to do regarding the data you collect, its handling, and the recording of actions. Google already requires all new add-ons to have a privacy policy, which is an opportunity to state, if you are using personal data, how it is “processed lawfully, fairly and in a transparent manner”.


For existing add-ons you might want to use ‘legitimate interests’ as a lawful basis if you are processing personal data of existing users. The UK ICO’s guidance states:


If for example you have been processing on the basis of consent but you find that your existing consents do not meet the GDPR standard, and you do not wish to seek fresh GDPR-compliant consent, you may be able to consider legitimate interests instead. However you must be confident that you want to take responsibility for demonstrating that your processing is in line with people’s reasonable expectations and that it wouldn’t have an unjustified impact on them.


You must still ensure that your processing is fair. If you wish to move from consent under the 1998 Act to legitimate interests under the GDPR, you need to ensure that you clearly inform individuals of the change in your privacy notice. To ensure there is no unjustified impact on their rights, you should consider giving them a clear chance to opt out, and retaining any preference controls that were in place.


Individuals’ rights


You should check your procedures to ensure they cover all the rights individuals have, including how you would delete personal data or provide data electronically and in a commonly used format.


The GDPR gives the following rights to EU citizens regarding personal data:



  • The right to be informed

  • The right of access

  • The right to rectification

  • The right to erasure

  • The right to restrict processing

  • The right to data portability

  • The right to object

  • Rights in relation to automated decision making and profiling


An important consideration is that the lawful basis you choose for processing personal data affects the rights the user has to erasure, portability and objection, summarised in the table below:




Image source – ICO Lawful basis for processing


Subject access requests


You should update your procedures and plan how you will handle requests within the new timescales and provide any additional information.


The new timescale for subject access requests is one month. As noted in the UK ICO’s 12 steps:


the right to data portability is new. It only applies: to personal data an individual has provided to a controller; where the processing is based on the individual’s consent or for the performance of a contract; and when processing is carried out by automated means.
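For the portability side, returning an individual’s data in a structured, commonly used, machine-readable format can be as simple as serialising it to JSON. This sketch assumes a plain object store; the function and store names are illustrative:

```javascript
// Answer a portability request with the user's data as JSON,
// a structured and commonly used machine-readable format.
function exportUserData(store, userKey) {
  if (!(userKey in store)) {
    return null; // nothing held for this individual
  }
  return JSON.stringify({ userKey: userKey, data: store[userKey] }, null, 2);
}
```

Returning `null` for unknown users keeps the same code path useful when confirming you hold no data on someone.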


Children


You should start thinking now about whether you need to put systems in place to verify individuals’ ages and to obtain parental or guardian consent for any data processing activity.


This might be particularly important if you are developing add-ons for education.


Data breaches


You should make sure you have the right procedures in place to detect, report and investigate a personal data breach.


With data breaches the UK ICO highlight that:


the GDPR introduces a duty on all organisations to report certain types of personal data breach to the relevant supervisory authority. You must do this within 72 hours of becoming aware of the breach, where feasible.


It appears breaches should be reported to the appropriate authority in your jurisdiction. As identifying who this is isn’t always straightforward, you should have it documented. For US-based developers here is a Summary of U.S. State Data Breach Notification Statutes (the 72-hour window applies to EU citizens’ data; you may discover your jurisdiction has additional requirements).
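Since the window is measured in hours from the moment of becoming aware of the breach, a trivial sketch of computing the reporting deadline:

```javascript
// Deadline for reporting a breach to the supervisory authority:
// 72 hours from when the organisation becomes aware of it.
function breachReportDeadline(awareAt) {
  const SEVENTY_TWO_HOURS_MS = 72 * 60 * 60 * 1000;
  return new Date(awareAt.getTime() + SEVENTY_TWO_HOURS_MS);
}
```

Becoming aware at midnight UTC on 25 May, for example, gives a deadline of midnight UTC on 28 May.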


Additionally, for add-on breaches you might want to contact Google directly. A contact address listed on Google’s Privacy Shield entry is data-protection-office@google.com.


Data Protection by Design and Data Protection Impact Assessments


You should familiarise yourself now with the ICO’s code of practice on Privacy Impact Assessments as well as the latest guidance from the Article 29 Working Party, and work out how and when to implement them in your organisation.


At the heart of this is having the appropriate workflows and policies in place. Having documented workflows is useful, as Data Protection Impact Assessments are not always required.


Data Protection Officers


You should designate someone to take responsibility for data protection compliance and assess where this role will sit within your organisation’s structure and governance arrangements. You should consider whether you are required to formally designate a Data Protection Officer.


If you are the sole developer this will be easy… Depending on the circumstances, including the country you are based in, you might have to formally register a data protection officer. In the case of the UK the ICO has a self-assessment tool.


International


If your organisation operates in more than one EU member state (ie you carry out cross-border processing), you should determine your lead data protection supervisory authority. Article 29 Working Party guidelines will help you do this.


A common myth about the GDPR is that data about EU citizens can’t leave the EU. This is not true. The UK ICO guidance on the GDPR Chapter V requirements is:


Transfers may be made where the Commission has decided that a third country, a territory or one or more specific sectors in the third country, or an international organisation ensures an adequate level of protection.


Whether a country provides an adequate level of protection is decided by the European Commission:


The European Commission has so far recognised Andorra, Argentina, Canada (commercial organisations), Faroe Islands, Guernsey, Israel, Isle of Man, Jersey, New Zealand, Switzerland, Uruguay and the US (limited to the Privacy Shield framework) as providing adequate protection. Adequacy talks are ongoing with Japan and South Korea.


In the case of transfers to the US, this is on a company rather than a country basis under the EU-US and Swiss-US Privacy Shield frameworks. Google is certified under the Privacy Shield, so personal data on Google’s infrastructure should have an adequate level of protection. A consideration for developers whose add-ons send personal data to other non-Google services is whether those services are in recognised countries or have been certified under the Privacy Shield.


Summary


Hopefully this post has highlighted some actionable steps you can take now. In writing this post both Steve and I contacted Google asking for clarification for add-on developers. Unfortunately there has been no comment from Google, but you can learn more about how Google is committed to GDPR compliance across G Suite and Google Cloud Platform services.


As noted at the start I’m not a lawyer or a data protection expert so if you have any corrections or additional information please share in the comments or get in touch.


Additional Resources


The GDPR is a big topic but sites like the UK’s Information Commissioner’s Office (ICO) have lots of useful resources to help with the GDPR.

Opportunity knocks: Using GDPR to strengthen virtual teams


This is a joint post with Maren Deepwell (cross-posted here). If you have missed our earlier posts we encourage you to revisit the beginning of the story of how we, as senior staff, are leading our organisation to adopt virtual operations.


May


This month we discuss our approach to GDPR, evolving virtual working practices and the importance of explaining the reasons for new procedures as part of implementing them.


Maren: We’re at the end of a super busy month and part of what’s been keeping us busy is GDPR… (thanks for writing handy blog posts for us to reference here). We’ve worked hard on the contractual, technical & legal aspects, but it’s also been an opportunity to review our relatively new virtual working practices. One issue I have been thinking about is finding the right balance between providing guidance and support whilst ensuring individuals also take appropriate responsibility. For example, we have policies about how to secure laptops or delete temporary files, and we regularly review these as a team and share updates on how we are implementing them. Yet even though you can monitor and review processes regularly, there is a large element of trust in our virtual working culture. To some extent we have to rely on everyone taking responsibility and making it part of their day-to-day working habits to follow new procedures. Explaining the reasons why we mandate certain things should help ensure that everyone understands their importance. In the GDPR training we did as a team, talking about how the new legislation relates to our values as an organisation (e.g. how that is reflected in ALT’s Privacy Policy) and why it affects us as staff on an individual basis was a really important moment for me. What’s your view on this?


Martin: GDPR has been a great opportunity to think about how, as a team, we store and process data. As a data controller, one of the things we have implemented is documenting our data processing activities, which includes how and where data is stored. Another critical aspect is how data is transferred. For our team this is greatly simplified by being a predominantly Chromebook-based organisation with centrally managed devices. This means we can mitigate a number of risks through device security policies and the built-in security features of Chrome OS. Another key aspect is that we have a ‘home working’ rather than a ‘remote working’ policy. This removes risks associated with regularly using open wifi networks in places like coffee shops, but it does leave open two questions: how do we ensure the security of home networks; and, given that a number of our team also travel, how do we maintain security on the road? The process of preparing for GDPR has highlighted that there is more we can do to secure data transfer, and we are investigating VPN options as a solution. Besides the technical solutions, it’s also been useful to reflect on how the team is responding to the personal responsibilities mentioned. In the case of GDPR it’s been great to see our team respond to the training we’ve provided and be proactive, both in highlighting areas where our procedures can be improved and in suggesting or making the required changes themselves. Not being co-located removes some of the opportunities to get an idea of how someone is doing; for example, body language is largely filtered out in Google Hangouts. It was only when I reflected on this that I realised I’ve started relying on other indicators. Has our work around GDPR highlighted anything like this for you?


Maren: You make an interesting point about tangible and less tangible indicators and how they can help inform our approach to supporting and leading the team. As you say, GDPR has created a lot of crossover between policies that apply to our organisation as a whole, publicly, like the privacy policy, the workflows that support membership services, and personal working practices at home and whilst travelling. Tangible examples of how all the new procedures and policies are being implemented – like seeing new forms, workflows or questions being discussed – are important. Together with the reporting and monitoring processes we use, these kinds of indicators enable me to manage the operational side of things. The less tangible things you refer to are harder to pinpoint, but I am also finding them more important since we have become a virtual team. They could be things like a casual comment or an informal conversation or something I spot when screen-sharing or working on a shared document. The more time I spend collaborating, the more I get a sense of how things are going. We have mentioned before how we have a ‘Show & Tell’ element at each of our weekly team meetings and recently we had several weeks of sharing what we use to manage our to do lists and plan our work. For the next month or so we will include a GDPR element in each team meeting, with everyone bringing examples of how they are implementing the new policies. All of these opportunities to collaborate, hearing colleagues think out loud, are valuable for helping me understand how others think or see things, and that enables me to better explain and support new processes.


Martin: Another aspect of our GDPR implementation I’ve been reflecting on is the degree of visibility of our individual activity to each other. In the case of GDPR I put a lot of effort into researching what we were required to do as an organisation and understanding various aspects of the new regulations from a legal and practical perspective. Parts of this process left very few tangible outputs and in some cases some of the outputs were not suitable for circulation in the team. It was a reminder that it’s not always possible to share everything we do and a level of trust is required. It was also a reminder of why our weekly team meetings are so important – arguably more important than if we were working in a face-to-face setting. You mentioned that our ‘Show & Tell’ has recently focused on sharing how we each plan our work. It was interesting to see the diversity of approaches and the varying levels of detail that we each use. As my role is very diverse, rather than having a single method I adapt my approach. For example, in the case of GDPR I’m using a mixture of our GDPR action plan in Google Documents and Sheets, Google Keep lists and managing my inbox with labels, stars/flags and snooze. For other projects like the Annual Conference we have a shared project plan we can all report our progress against. In the case of the Annual Conference this has changed little from when most of the team was office based. I think this still works well but wonder whether, if we were creating this from scratch as a virtual team, you would do something different – in particular, to increase the visibility of what we are all doing at a particular time?


Maren: I’d like to do that – spend time thinking about what starting from scratch would look like. I imagine that (1) our values, (2) the importance of working together with volunteers, our Members, and (3) our overall policies for working would remain constant. But… there are other factors: the size of the team necessarily means that many tasks are more independent and only some a consistent team effort. With 5-6 staff you can’t easily create sub-teams, for example, that would work together ‘more visibly’. I’ve also considered tools like Trello or Slack, but I’m not sure how well they’d work for everyone, and I feel allowing everyone a choice in which methods to use for organising work, e.g. what we shared in our to do list show & tell sessions, can really contribute to productivity. We have our overall operational plan which all other plans/lists are related to and in my mind that provides the consistency required – although maybe we could make use of it more frequently. Overall the high level of our output and achievement is a good indicator that our current practice is effective, and that is reassuring. What I mean to say is that we have an opportunity, rather than an obligation, to reimagine what this could look like. Hearing you reflect on your perspective and comparing it to my own has opened up the question of what this looks like for each of us. With a team away day coming up in a couple of weeks we could take the opportunity to dedicate some time to reflecting on this as a group.


This is a joint post with Maren Deepwell (cross-posted here) continuing the story of how we, as senior staff, lead our organisation to adopt virtual operations. This category includes previous posts.


June


Last time we discussed how we used GDPR as an opportunity to strengthen how we work as a virtual team. Since then we have had our first face-to-face team day, an important milestone in creating a blended approach to running a virtual organisation.



Maren: The team day was followed the next day by the meeting of the Board of Trustees, in which, for the first time, all of our colleagues took part. I learnt a lot from both days: for example, I had to adjust my mindset from having a ‘team day out’ (which was previously the only time we would all travel somewhere together once a year to do some team building and have a meal together) to having a day working together. Whilst that may seem an obvious point to make, it’s an important distinction to communicate to everyone involved. The agenda we set out helped us prepare and be ready to focus on the task at hand. In order to make the most of team days we plan them in different locations. This time it included a site visit to the venue for our upcoming conference. Whilst combining activities like a site visit with a day working together face to face has a lot of advantages, having a changing location means that we need to sort out all the practicalities, like somewhere quiet to work or access to WiFi, afresh each time. That’s a big change from having an office at which we convene and also means ensuring that we think about how to support the team as a group, taking into account individual needs. Some of the startup leaders I work with talk about how this degree of agility is difficult to make work in an equitable way, and I found that the two days took more preparation than I had anticipated. As well as logistics, it is important that these days reflect our aims and values as an organisation. Inviting all staff to take part in the Board meeting is a good example of how we are trying to do this, but as well as time to work together, we also planned in time to eat and catch up informally and I felt those pockets of unstructured time were really important. This time, the Study at the Museum of Manchester became our work space for the day.


Martin: The opportunity to work in new places is a nice feature of our approach, particularly when they are as nice as the Study. Thinking about the practicalities, one thought that came to mind is that with a team of 5, soon growing to 6, we are perhaps at the limits of places where it is easy to just turn up and get a seat. If I’ve got time before/after events or meetings in Edinburgh I’ll often go to the Dome on Potterrow, or when in Glasgow the Saltire Centre. These are great spaces but get very busy at times and you’d struggle to get more than 6 people around the same table. There does, however, seem to be an explosion of affordable hot-desking and meeting spaces, in particular as part of start-up hubs. As a membership organisation there are also perhaps opportunities to combine visiting some of our Organisational Members with having some space for a team meeting. Combining the two could be interesting as it would let other members of the team see what we do. I suppose the danger of such an approach is the site visit becomes a distraction from getting team business done. Something else that came out of our meeting which I hadn’t really considered before is that while I was happy and able to get up and leave for Manchester at 4:30am, this might not be possible for everyone (plus getting home after 11:30pm because of delayed flights ain’t fun). It raises interesting questions about equitability.


Maren: That’s true. As our team is distributed across the U.K., every location means longer travel times for someone. Achieving a balance between the flexibility of home working day to day and occasional travel whilst meeting our organisation’s needs can be a challenge. Organising the logistics is still a learning curve, but the other aspect I’d like to talk about is the actual work we did together. There were three parts to our day: first, the site visit which we all took part in and which helped us plan for the upcoming event. I think it was very useful to have everyone contribute ideas and ask questions on site and it saved us a lot of reporting back and another visit in the long run. Next, we spent some time together reviewing our project plan. We do review the plan regularly during our virtual team meetings, but it was insightful to do so in person as a team, seeing it projected on a big screen. For example, I noticed that we discussed more, asked more questions of each other. That continued when we spent time together working on different things after lunch. There was an opportunity for building rapport more informally that I hope will translate at least to some degree into our virtual collaboration. Two of our team had not met in person before, so that was an important function of the day, too.


Martin: I agree that it felt like there was more collaboration and communication when we were working together in the same room. I think this underlines that it’s important to recognise there are differences between working as a distributed team and working in a shared physical space. Until we all have a holodeck in our homes I don’t see this changing, but it is also a reminder that we need to think about distributed teams differently and not just faithfully recreate the physical space online. This was captured in a post by Noello Daley on ‘What Co-located Teams Can Learn From Remote Teams’. In particular, Noello highlighted “the importance of shifting not only process, but mindset”, going on to say that you should “shift from a local, spoken culture to a global, written culture”. I think ALT had a strong global written culture before becoming a distributed organisation, so perhaps the difference is less apparent to me. The process and mindset is an interesting area. There is inevitably disruption in becoming a distributed team and in some ways you want to minimise this to allow people to adapt to a new working environment; the tension is that as a manager you don’t always want to continue old processes, either because they are less efficient or because they don’t work in the new model. Another challenge, something else Noello highlighted, is that in distributed teams there needs to be “empathy for each others’ needs”. This is something we have touched upon already. I feel part of the solution is also touched upon by Noello and mirrors our own blended approach: “plan to get together about four times per year. Use that time to re-establish team goals and culture”. Given the Manchester trip was an opportunity to share our individual perspectives on goals achieved with our Trustees, and the process for identifying those goals was collaborative, I think we should be having more staff days around Board meetings.


Maren: I agree. Clearly our first attempt has highlighted how much potential these days have. But what about the day to day? Following on from the post by Noello you mention, I have been reflecting on how important it is to take some personal responsibility for what this recent article (found via @hopkinsdavid, thank you!) called ‘finding connection points’: the author suggests taking time to schedule regular lunches with co-workers, meet clients (Members, in our case) and get involved in the community to ensure there is some face to face contact or at least time focused on building relationships. Some of this I would have previously thought about more in the context of CPD. As a small team we have long set up days to visit other organisations or Members to learn from and see how they do things, but in the context of operating as a virtual team, that’s taken on a different significance. Now it’s about making connecting with others an integral part of what we do as individuals, part of our professional practice. We can set an example and enable others, but to some extent it depends on how much an individual is willing or interested in being part of, and contributing to, that kind of working culture. Tech-focused solutions, like for example shared bookmarks, can help build a sense of shared space online. Yet it still depends on everyone contributing, everyone recognising the importance of working in a certain way and what benefits that has for us as a team and the organisation as a whole. In the last few hours of our team day we each prepared to talk to the Board about some key ways in which we have individually contributed to the success of the organisation in the past year and listening to that was probably the most powerful moment of the two days for me. We may not all be in the same place very often, but we are all part of the same vision and when we presented to each other I really felt that come across.

I recently did a Q&A session as a Hangouts On Air. As the recording is quite long I wanted to provide a quick way for people to jump to particular questions. Both YouTube and Google+ make this a lot easier by detecting time codes in your description or post text and converting them to links that take you to that part of the video.




[See this post]


You can just listen back to your YouTube clip and take a note of the timecodes you want to highlight, but I wanted a quicker way. One option is to download the subtitle file automatically generated by YouTube and scan the text to find the parts of your video you want to highlight. To make this easier for me I created a Google Sheets template where I can read through the subtitle text and create bookmarks at specific parts:




[View in Google Sheets]


Making your own video jump list



  1. Make a copy of this template

  2. On your YouTube channel find your video and in the edit mode click on Subtitles/CC and then the subtitle file you want to use:


  3. In the Subtitle edit page click on ‘Actions’ and download the .sbv file (only .sbv works with this template)


  4. Open the downloaded .sbv file in a text editor, select all the text and copy.

  5. In your copy of the Google Sheet template click in cell B2 and paste all the text you copied in the previous step.

  6. In the rows where you would like to create a bookmark, add your text in column C:


  7. Once you’ve added all your bookmarks you can copy/paste your list starting in cell E2 into your YouTube video description or G+ post which has your YouTube video link in it:



You should now have a video description like this or a G+ post like this. As YouTube/Google have done all the hard work creating the hyperlinks in the text, you can also copy/paste to other places, for example, this Google Sites page. As well as improving navigation to your YouTube video, if metrics are your driver you’ll also get extra view hits each time a jump list item is used :).
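The same idea can be sketched outside Google Sheets. The snippet below is a minimal, hypothetical sketch in plain JavaScript of what the template does: parse the .sbv text into timed cues, then emit “M:SS label” lines ready to paste into a YouTube description (the function names and sample captions here are mine, not part of the template):

```javascript
// Parse .sbv text into an array of {start, text} cues.
// .sbv cues are separated by blank lines; the first line of each cue
// is "start,end" (e.g. "0:01:30.500,0:01:35.000").
function parseSbv(sbv) {
  return sbv.trim().split(/\n\s*\n/).map(block => {
    const lines = block.split("\n");
    const start = lines[0].split(",")[0];
    return { start, text: lines.slice(1).join(" ") };
  });
}

// Convert "H:MM:SS.mmm" to the short "M:SS" (or "H:MM:SS") form that
// YouTube/Google+ auto-link in descriptions.
function shortTime(t) {
  const [h, m, s] = t.split(":");
  const sec = s.split(".")[0];
  return Number(h) ? `${Number(h)}:${m}:${sec}` : `${Number(m)}:${sec}`;
}

// Build the jump list: for each bookmark label, use the start time of
// the first cue whose text mentions it.
function jumpList(sbv, bookmarks) {
  const cues = parseSbv(sbv);
  return bookmarks.map(label => {
    const cue = cues.find(c => c.text.toLowerCase().includes(label.toLowerCase()));
    return cue ? `${shortTime(cue.start)} ${label}` : null;
  }).filter(Boolean);
}

const sbv = `0:00:00.000,0:00:04.000
Welcome everyone to the session

0:01:30.500,0:01:35.000
First question: what is Apps Script?`;

console.log(jumpList(sbv, ["First question"]));
// → [ '1:30 First question' ]
```

The spreadsheet template works the same way, just with the subtitle text pasted into one column and the bookmark labels typed into another.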

I’m currently at Google Cloud Next ’18 avidly following the Google Apps Script and education related sessions, which I’ll try and capture in this tag.


Course Kit is going to be a suite of tools designed to help educators manage assignments using Google Docs within their existing VLE, using the LTI standard.


We’re bringing the collaborative capabilities of Google Docs and Drive to Learning Management Systems through the Learning Tools Interoperability standard. The toolkit will include an assignment tool which allows instructors to create and grade assignments using Google Docs within their LMS, as well as file embed tool to embed Google Drive files.


Course Kit is designed to remove some of the main pain points when instructors want to integrate Google Docs as part of assessments. With the LTI integration teachers don’t have to leave their VLE to set up the assignment. Course Kit also extends the built-in functionality of Google Docs. At submission the learner is removed as an owner of the Google Doc, but a copy of their submission is automatically saved to the student’s Google Drive. To make it easy to move between submissions, there is a student dropdown switcher at the top of each Google Doc.



One of the biggest features is the modification to the existing Google Docs comments feature, which enables you to quickly use a bank of canned feedback responses. This can be used by starting a comment with a ‘#’ and typing a term, which populates an autocomplete list. The feedback bank can be added to as you type, or you can bulk import an existing list. Feedback comments can also be added from the sidebar.


Whilst the Google Doc is being graded the usual comment email notifications are disabled, so the learner won’t see draft comments until the assignment is returned. As well as adding feedback to the Google Doc you can add a grade and any final comment. When the teacher returns the Google Doc to the student, the student is restored as an owner and this time the copy saved includes the feedback comments.


Course Kit was initially developed with Higher Education in mind, with input from some G Suite Edu customers, and Google is now looking for more beta testers to join via g.co/coursekit. The video below gives a quick overview:



Course Kit is currently completely separate to Google Classroom, continuing Google’s trend of having duplicate products. My understanding is that Course Kit doesn’t work as a standalone product and needs the LTI integration within your VLE. As there are people already asking for similar features in Classroom, it sounds likely that there will be some sort of integration at a later date. In terms of other feature requests, it’s perhaps not surprising that plagiarism detection is already at the top of the list…

At Google Cloud Next ‘18 it was great to see and hear about a number of G Suite and Google Apps Script updates. In the past when I’ve presented on Apps Script, a common question was about the sustainability of the product. Given the continued investment in Google Apps Script and its integration within other Google products like AdSense, Data Studio and App Maker to name a few, the future looks very promising. There were also some impressive stats shared about Google Apps Script at Next: 3.3B weekly executions; 8.7M weekly end users; and 270K weekly developers.


Google Apps Script Stats


Some of those 270K developers had the opportunity to meet-up at Next, many meeting face-to-face for the first time, as well as an opportunity to speak to some of the Googlers who are supporting Apps Script developers.


Google Apps Script Meetup at Google Cloud Next '18


As part of Next ‘18 there were a number of sessions introducing the product and highlighting user stories; some of these have been highlighted in this list compiled by Wesley Chun, and I’ve created a YouTube playlist of recorded sessions.


Getting up to speed – recent updates


If you have a look at the Google Apps Script release notes, you’ll see that even before Next ‘18 there were a number of recent updates, which include:



  • Sheets – 100+ new methods

  • Gmail – new methods including GmailDraft

  • Calendar – triggers for new and modified events and 7 new methods

  • Slides – 40+ new methods

  • Google Apps Script dashboard

  • Google Apps Script API which enables:

    • Create, read, and update Apps Script projects.

    • Create and manage project versions.

    • Create and manage project deployments.

    • Monitor script use and metrics.

    • Run script functions remotely.



  • clasp – a Google Apps Script Command Line Interface (CLI) which now supports TypeScript.
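As a rough illustration of the “run script functions remotely” capability, the sketch below assembles the REST request that the Apps Script API’s `scripts.run` method expects. This is a sketch only: `SCRIPT_ID` and `listSheetNames` are placeholder names, and a real call would also need an OAuth 2.0 bearer token in the request headers.

```javascript
// Build (but don't send) the HTTP request for the Apps Script API's
// scripts.run method, which executes a named function in a project.
function buildRunRequest(scriptId, fnName, parameters) {
  return {
    url: `https://script.googleapis.com/v1/scripts/${scriptId}:run`,
    method: "POST",
    body: JSON.stringify({
      function: fnName,       // name of the function to execute
      parameters: parameters, // arguments passed to that function
      devMode: true           // run the latest saved code, not a deployment
    })
  };
}

// Placeholder values for illustration only.
const req = buildRunRequest("SCRIPT_ID", "listSheetNames", ["spreadsheet-id"]);
console.log(req.url);
// → https://script.googleapis.com/v1/scripts/SCRIPT_ID:run
```

Combined with clasp for pushing code from the command line, this opens up workflows where Apps Script projects are managed and executed entirely from outside the script editor.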


Other recent announcements include Hangouts Chat bots, which can be powered by Google Apps Script, announced in February 2018; Calendar add-ons, news of which broke in June 2018; and the Google Sheets macro recorder, launched in April 2018.


Launched and roadmap developments from Google Next ‘18


Launched


Announced at Next ‘18 were:



  • OAuth Whitelisting – control over Apps Script projects that can be run in your organisation

  • 30 Minute Executions – these were in early access but are now available to business, enterprise and education customers


Coming soon


Announced as ‘coming soon’ were a number of core updates to Google Apps Script:



  • Modern Javascript – A common request is updating the JavaScript syntax to be able to use more modern features and libraries. At Next ‘18 it was announced that this would be supported by moving to ECMAScript 2017 scripting-language specification.

  • Performance – tenfold (×10) faster execution

  • Job Service – breaking long running jobs into batches that can be run in parallel

  • Reliability – “simplified and more robust architecture for enterprise grade reliability” … whatever that is.

  • Flexible Quotas – daily quotas are removed and replaced with high-cap rate limits. You can sign up for early access.

  • Managed Projects – domain policies for Apps Script projects with associated Cloud Console projects

  • G Suite Developers Hub – expanded dashboard with templates, starred projects, and the ability to see the status/health of and manage all of your triggers.
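To give a flavour of what the move to ECMAScript 2017 means in practice, here is a small sketch in plain JavaScript of syntax the legacy ES5 (Rhino-based) Apps Script runtime could not run but an ES2017 runtime should handle (the session data and function names are invented for illustration):

```javascript
// Features the old ES5 runtime lacked: const/let, arrow functions,
// template literals and parameter destructuring.
const sessions = [
  { title: "Apps Script at scale", attendees: 120 },
  { title: "Sheets tips", attendees: 80 }
];

// Destructuring in the parameter list plus a template literal.
const summarise = ({ title, attendees }) => `${title}: ${attendees} attendees`;

// Arrow functions with Array methods instead of ES5 function expressions.
const lines = sessions.map(summarise);

// Reduce with an arrow function for a running total.
const total = sessions.reduce((sum, s) => sum + s.attendees, 0);

console.log(lines.join("\n"));
console.log(`Total: ${total}`);
// → Total: 200
```

ES2017 also brings async/await, which would be a natural fit for the batched, parallel long-running jobs mentioned under the Job Service above.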


Apps Script Training Resources


As part of Next ‘18 the event was topped and tailed with a number of boot camps, including one on “Power your apps with Gmail, Google Drive, Calendar, Sheets, Slides and more”. The codelabs used as part of these are available for free from the following links:



There were other product updates announced at Next ‘18 and of particular interest are updates to Google Sheets which have been compiled in a post by Ben Collins. Overall the future of Google Apps Script looks very healthy…


This is a joint post with Maren Deepwell (cross-posted here) about how we, as senior staff, lead our organisation to adopt virtual operations. You can read previous posts in this series.


We’re six months into our journey and in this special edition we look back at the highs and lows, share practical things we’ve learnt along the way and take our conversation from post to podcast to help us reflect and look ahead.



Maren: when you suggested that we share our experiences openly I wasn’t really sure what to expect. After six months, I find it’s become a valuable part of my practice. It acts as a regular prompt to reflect not only on my own work, but the team and the organisation’s progress; it’s made us set aside time to have a regular dialogue about important, but not urgent things; and it’s helped me find a voice to share more openly in a way I hadn’t done before. Also, on a practical level, a written conversation helps alleviate my tendency to interrupt someone before they are finished. Looking back at our first post one of the key things I’ve learnt is that this process itself, as well as the output, is important. I’d encourage anyone to build sufficient rapport and trust to try out a similar approach to collaborating. That said, I’ve also found aspects of it challenging! For example, trying to strike the right balance between sharing and respecting the boundaries of what can’t be shared has been difficult at times. Or deciding what to focus on, what might be useful to others. It’s quite a big risk to take to share leading a transition whilst it’s happening and I’m grateful that both the Board and the team have been supportive from the beginning.


Martin: The process of writing these posts has been very useful. The asynchronous nature of writing and the opportunity to discuss our thoughts has created a space to reflect on where we are and think about the future. Finding the right balance can be tricky. As part of ALT’s remit we are keen that, as well as sharing the positive impact that technology has on learning and teaching, there is also an opportunity to share when things went wrong or didn’t work out. As you highlight, as part of our transition it’s been important that we retain the trust and morale of the rest of the team. I should say that looking back over the last 6 months there have been no issues that haven’t been relatively easy to resolve. A challenge that came up in our February update was providing remote IT support. Overall we’ve had very few issues to deal with. Something at the back of my mind is our reliance on personal home broadband connections. Recently I changed my broadband provider. As I had overlapping contracts, in terms of connectivity the change was seamless (the new WiFi router did however kill my home print server). Having checked the new provider’s broadband speed before signing up, I was confident there weren’t going to be any issues with speed. But what if there was an issue, or the connection can’t cope with the extra load from my daughter being at home during the school holidays? There is also the challenge of providing remote support when there is zero connectivity. We have some contingency plans in place for these situations but I think this is an area we can work on more over the next 6 months.


Maren: You are right, in the first few months we thought a lot about infrastructure, because that was the most practical aspect of making this transition. On an ongoing basis, too, with staffing changes, moving house, etc., the infrastructure is always a priority. I can now travel with our phone system in my pocket, and we can take our “office” with us when we run large events so that the whole team can be on site rather than someone having to cover the physical office space. It’s a powerful transformation. In our post about March we started discussing more about supporting staff, collaborating and also how this way of working influences our work as Learning Technologists. Here are three examples of how virtual working has changed my professional practice as a Learning Technologist. First, technology fails at team meetings. We’ve talked about our weekly team meetings a lot and how they form a cornerstone of us working together effectively as a team. We have had all kinds of technology fails, individually and as a team, over the past year. It feels very similar to getting things wrong in front of a room full of students or colleagues, and the experience has given me much greater empathy with someone nervous about trying something new. Secondly, I have a much greater appetite for finding technological solutions to problems we identify. Bringing all of our operations into the virtual domain and working together with our Trustees, Members and colleagues in the same blended way has created a real sense of opportunity to improve what we do and that in itself has been really exciting (although maybe my enthusiasm for doing things better did not need additional motivation). Last, and this relates directly to my role as a Line Manager, I think the quality of communication is more important than anything else I do and that shapes all aspects of my practice.
Regardless of whether it’s a chat message, video call, email, phone call or indeed meeting up face to face, I work hard to communicate equally well in every mode. As a virtual team we have to cope with difficult conversations, bad days and unexpected crises online. Communication has always been really important to me and to how I succeed, but I am developing new skills and strategies through our virtual way of working. It feels like we are creating an effective blend of processes, technology & culture to build resilience for the highs and lows that we face and communication is at the heart of that. It may seem obvious, but our work over the past six months has really expanded my horizons on what that means in practice.


Martin: Communication was something that came up in the May blog post. Whilst that was a busy time dealing with implementing GDPR, we reflected on knowing how the team were coping and on visibility when working remotely. This month I attended Google Cloud Next ‘18, Google’s main conference for sharing what it’s doing in its cloud platforms, including G Suite. G Suite is one of the core tools we use at ALT and has been invaluable in making it possible to work as a distributed team. Google use this event to launch new tools and enhancements to their products, and from the sessions I attended improving communication was a strong theme; as one of the presenters put it, “the ability to communicate effectively defines success”. In terms of what we discussed in May, a new product that caught my eye was Hangouts Chat. This is a new version of Google’s chat tool, similar to tools like Slack or Microsoft Teams. One of the features of Hangouts Chat is quick actions and reactions to messages. As part of this Google highlights emoji reactions “to build stronger, quicker and more expressive communications”:



I’ve mixed emotions about emojis, particularly in a work context. I don’t mind using some emoticons like 🙂 in messages, but as I don’t like the emoji palette Google uses I’ve turned off the feature that automatically turns text emoticons into emojis (e.g. 🙂 -> ). Also, given my own feelings about the Google emojis, at times I feel reluctant to impose these on others; using an emoticon instead, I feel, keeps a degree of subtlety. Emojis or ?


Maren: I should probably start by saying that I don’t really use emoji much at all – neither at work nor in my personal communication. The person I text with most is my mother and she uses one emoji: little pink hearts at the end of a message. In her case that can mean anything from “thanks for taking me to the hospital today” to “sending you lots of love” to “take care and have a nice weekend”. I’m one of those people who mostly texts in full sentences with punctuation. That said, I have started using emoticons more in a work context since I started working as part of a distributed team. In chat, like the Google chat we use for informal or immediate communications, I do find it useful to be able to convey my meaning in more than words. There are many instances when informal but important conversations can be more nuanced – although that also depends on the person I am chatting with and how well we know each other. Whilst in theory icon-based communication should be more easily understood than words, I find that in practice most people I communicate with have very specific patterns in their use of icons and over time I learn what they mean. When I use tools like Slack for projects, I mainly use emoji such as to signal that I have seen a message, to show that I am participating or supporting something, and for me that kind of interaction quickly becomes less meaningful. It’s like ‘likes’ or ‘hearts’ on social media. It’s useful, but limited. And I also dislike the Google chat and the iOS emoji palettes, even if they have become more diverse in recent years. Now, to answer your question: in a broader context, with a bigger user base and more scope for interaction, I’d probably say “Emoji? – ” – but in our immediate context of leading a distributed team, I think it’s .


That wraps up our written conversation for this month, but we are also experimenting with the format by recording a special podcast this time, reflecting a bit more on the six months since we started this project and talking about what’s ahead and where we hope to be by the end of the first year.




Next month it will be the 25th Association for Learning Technology Annual Conference. As a member of the ALT staff team I’ve attended the conference every year since 2013. My first Annual Conference was in 2009 and is memorable for a number of reasons. At the time I was working for the Jisc RSC Scotland North & East, in part supporting the development of EduApps, a collection of portable assistive software applications that could be run from a USB pen drive (you can read more about this project in an article on EduApps for the ALT Online Newsletter). We received funding that year from Jisc to distribute a copy of EduApps to all ALT-C 2009 delegates. As part of that I was able to experience the Annual Conference as a presenter and delegate. I remember being in awe seeing many of the names I knew in the edtech community walking the corridors of Manchester. As it turned out, the session I remember most was by someone I hadn’t heard of before the conference. The session was by Joss Winn on ‘WordPress Multi-User: BuddyPress and Beyond’. I didn’t know Joss before attending the conference but remember being impressed by his depth of knowledge, openness and willingness to share his expertise. Over the years since 2009 I’ve continued to follow Joss’ work and his original conference presentation continues to influence my own interests … not least because the ALT conference system I support and develop uses BuddyPress.


So the ALT Annual Conference, like hopefully all ALT events, is an opportunity to make new connections in knowledge and friendship.





On 23 March 2018 Twitter announced that it was retiring search timeline widgets, suggesting people move to its ‘Curate a Collection of Tweets’ feature. For a lot of people, myself included, this is far from an ideal solution. Given the number of hashtag communities I’m part of I’d much prefer to set something up and let it run in the background.


For TAGS users the good news is I’ve created a custom widget to display the last 10 results from a TAGS archive. This means you can display a search timeline which will automatically show new results as they are collected in TAGS. If you are reading this from my site rather than via an RSS reader an example is embedded below:



Creating your own Twitter search timeline widget


To create your own widget either set up a TAGS archive or use an existing one. Once the Google Sheet is shared so that ‘anyone with a link can view’, use the TAGS Widget Setup Page to configure the widget appearance before grabbing the embed code:


TAGS Widget Setup


Note: Given that TAGS defaults to updating every hour, the widget content currently only updates when the page is refreshed.


Under the hood


The widget itself runs off a basic HTML/JavaScript page. Twitter provides a JavaScript API that is able to scan a page for blockquote.twitter-tweet elements and turn them into fully-rendered embedded Tweets. To get the blockquote.twitter-tweet elements into the page I’m using the Google Visualisation API to query a Google Sheet and pull content into a Table Chart. There are actually a number of libraries that can read a Google Sheet into a webpage. The reason I went with this particular combination is that there is a handy PatternFormat method which makes it easy to combine values from different columns into the blockquote.twitter-tweet markup the Twitter JavaScript API recognises:



// format the text column into markup that Twitter will convert into a tweet
// (the pattern string shown here is illustrative – it was stripped from the
// original post; placeholders {0}-{3} map to from_user, text, time and id_str)
var formatter = new google.visualization.PatternFormat(
  '<blockquote class="twitter-tweet"><p>{1}</p>&mdash; @{0} ' +
  '<a href="https://twitter.com/{0}/status/{3}">{2}</a></blockquote>');
// apply the formatter, setting the formatted value of the text column
formatter.format(data, [c['from_user'], c['text'], c['time'], c['id_str']], c['text']);
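For reference, widgets.js only upgrades markup of a particular shape. Here is a minimal sketch in plain JavaScript of the blockquote a sheet row needs to become – the column names (from_user, text, time, id_str) are assumptions based on the TAGS archive sheet, and the exact markup is illustrative rather than a documented contract:

```javascript
// Build the blockquote.twitter-tweet markup that Twitter's widgets.js
// scans for and upgrades into a fully-rendered embedded tweet.
// Column names here are assumed from the TAGS sheet, not guaranteed.
function buildTweetBlockquote(row) {
  return '<blockquote class="twitter-tweet">' +
    '<p>' + row.text + '</p>' +
    '&mdash; @' + row.from_user + ' ' +
    '<a href="https://twitter.com/' + row.from_user +
    '/status/' + row.id_str + '">' + row.time + '</a>' +
    '</blockquote>';
}
```

The PatternFormat call in the widget performs this same kind of column substitution inside the DataTable before twttr.widgets.load() runs.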

The Google Visualisation API also has an addListener() method which allows us to wait until the table is ready before asking the Twitter widget to render each tweet:



google.visualization.events.addListener(table, 'ready', function () {
  twttr.widgets.load();
});

The last trick is to tweak the CSS of the embedded tweet to reduce the font size. To do this we need to jump into the Shadow DOM that the Twitter widget JavaScript generates and add some custom styling:



twttr.events.bind('rendered', function (event) {
  var hosts = document.getElementsByTagName('twitterwidget');
  var hostList = [].slice.call(hosts);
  hostList.forEach(function (el) {
    var style = document.createElement('style');
    style.innerHTML = '.SandboxRoot.env-bp-min, .SandboxRoot.env-bp-min .TweetAction-stat, .SandboxRoot.env-bp-min .TweetAuthor-screenName, .SandboxRoot.env-bp-min .Tweet-alert, .SandboxRoot.env-bp-min .Tweet-authorScreenName, .SandboxRoot.env-bp-min .Tweet-card, .SandboxRoot.env-bp-min .Tweet-inReplyTo, .SandboxRoot.env-bp-min .Tweet-metadata { font-size: 0.9em; }';
    el.shadowRoot.appendChild(style);
  });
  // when tweets have rendered remove loader
  loaded();
});

If you are interested in exploring the source code I’ve posted it on GitHub and created a TAGS Widget forum to capture questions and discussion. Enjoy!






This post continues our series on ‘openly sharing our approach to leading a virtual team’ – a joint project with Maren Deepwell (cross-posted here) for which we write a monthly blog post, some of which are special podcast/conference editions.



August



We are at the end of the busiest month of our organisation’s year in the run up to our Annual Conference. Getting here is a big milestone for our organisation and a real test for our approach to leading a virtual team.



Maren: We have been busy with preparations for the conference and our team of six has been working with over a hundred volunteers who contribute to organising the event. It’s usual for us to work with Members all across the U.K., but during the past month we’ve had to communicate and collaborate significantly more than usual. We’ve put our still quite new processes and working culture under real pressure, and we’ve made it through the toughest weeks in good shape. We’ve learnt a lot along the way, and I’m proud of how well we’ve worked together. Now, as we get ready to take the whole team to the event, my thoughts are on the face-to-face side of our predominantly online working lives. We put a lot of thought into delivering the best possible experience for participants of the event and we talked about this recently on Edutalk radio with the Chair and President of ALT. We all agreed that there is special value in being able to take part in person. So our job for our team is to make sure that we have a plan for supporting each other over 4 long and busy days, so that we can all do our best. I’ve been thinking about a couple of things we started to discuss in June, when we had our first team day. For example, getting together to run this event also means seeing each other and working together in person for the first time in a while – or ever. That’s not insignificant. Also, each year we have colleagues for whom this is their first experience of this event and although I’ve got previous years to draw on, each year is different and we have only half a day to get ready. Talking through each day in advance, planning meals and breaks together, and being clear about expectations of when we work and when we have down time helps get us all on the same page before we arrive. It also makes it easier to adjust from working at your desk at home to being with colleagues hosting 400 participants. What are your thoughts?



Martin: Looking back over August it’s interesting to reflect on the number of conversations we’ve had on supporting our team during the conference. Confidence and expectations were areas that came up a couple of times. With many of us never having fully experienced all 4 days of our Annual Conference before, I think it’s a difficult line to walk in terms of planning for some of the potential pressure points whilst not unduly impacting on our confidence. Something that I thought was very useful as part of one of our online team meetings was a round robin to see how everyone was doing in terms of conference workload. The continual challenge I see in distributed teams is maintaining group awareness. This includes knowing what others are working on, where they are up to in specific tasks and what they are planning on working on next. In the last six months the team has grown by 20%, from 5 to 6. It’s nice to have an extra pair of hands and a new colleague who is able to contribute to our delivery, and we are already seeing the benefits of this, but at the same time we now have a bigger team in which to maintain an awareness of what each of us is working on, and more people who have a voice at our weekly team meetings. I’ve not calculated how much extra time this takes each time we have new staff join. In software development you could point to Brooks’s observation that the communication costs of a project rise with the square of the number of people involved. I think in our case this would be a gross overestimate, and even in software development a number of people have questioned Brooks’s Law, but it’s interesting to consider the implications of growing a distributed team. I do believe investing time in gaining better awareness is still very useful. All the planning and preparation will hopefully result in a positive experience for all. Do you feel you’re spending more time managing a larger team?
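As a quick, illustrative calculation of why each new team member adds disproportionate communication overhead (this is the standard pairwise-channels count, not anything specific to our team):

```javascript
// Pairwise communication channels in a team of n people: n(n-1)/2.
// This is the usual back-of-envelope figure behind the observation
// that communication costs grow faster than headcount.
function channels(n) {
  return (n * (n - 1)) / 2;
}

// Growing from 5 to 6 staff (a 20% headcount increase) takes the
// possible one-to-one conversations from 10 to 15 – a 50% increase.
```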



Maren: definitely. Half of my focus is on observing, listening, supporting, advising… it’s the busiest time of the year, it’s a crunch point, it’s naturally a big part of my work just now. But half of my focus is far in the future, 2, 3 or even 10 years from now, and our conference provides essential input to navigating what’s ahead, to being ambitious, to nurturing the vision in my head, in my heart. Even one year from now things will be quite different and looking back over the past five years reminds me how much things have already changed. How far we’ve come. Any high-performing team feels growing pains when moving from success at one level up to the next. Things stop working in the way they have done before new dynamics are established. Getting through a conference together is a good bonding experience to build on and I feel that this is easier to accomplish in person rather than online when everyone is distributed. It seems very achievable to build individual working relationships virtually, and over the past year or two I have gained experience in that, but group dynamics are harder to establish and our blended approach, seeing each other at events and team days, is important here. Five years ago, when our team grew to include your role as a second senior member of staff, I had to learn all the things I now rely on. It took time for us to figure out how we would lead things together, assess each other’s strengths and where support would be most needed. Whilst we can’t put an exact figure on it, it’s fair to say that it took a lot of time to establish a senior staff team and that we continue to invest time and effort into that as things evolve. But it has more than doubled our achievements as a result, and increased capacity and resilience in many ways. And, at this time of year, it also provides us with a safe space to assess how we are coping with pressure, and I know someone has my back if the answer is not very well.
It’s opened my eyes to how valuable it is to invest in communication and team work whereas before I would have probably argued to be more effective doing things myself.



Martin: The difference between building individual and team dynamics in a distributed organisation is very interesting. I was recently reading a paper on ‘Group Awareness in Distributed Software Development’ which included the conclusion that ‘occasional face-to-face gatherings assist group awareness’, something you’ve also highlighted earlier. I was wondering if the intensity of a 3-day, 400+ delegate conference was the best occasion for what will, for some, be their first face-to-face meeting. Thinking back to my first time being part of the ALT conference, which happened to be when I was also a distributed member of staff, my first face-to-face meeting with a number of the team was a group lunch the day before the conference started. As a more socially focused activity it allowed more spontaneous communication and, I believe, created an opportunity to strengthen a shared team identity. Time is also a factor identified by Hinds and Mortensen: “relationships between distant team members become more harmonious over time as teams develop familiarity and shared processes”. The quality of the time and the mix of informal and formal all hopefully support a stronger team, and it is also a great excuse to continue what has become the traditional team visit to a pizzeria the night before our big events.



Maren: hmmmmm… pizza. Definitely a tradition I approve of. I hadn’t come across either of the references you mention, and they make for interesting reading. It’s thought provoking to see a more analytical approach. I always see relationships and team dynamics as messy, shifting, unpredictable with many known and unknown unknowns. Every year and every conference turns out to have surprises in store and that is why the months of preparations are so important. We develop trust in our processes and plans, we form the habit to rely on our lists, we solve problems together. By the time I arrive at an event we’ve organised, I’ve got a list that keeps me on track and am ready to enjoy the experience. Whatever unexpected twists and turns the days hold in store, this is the moment when I feel really privileged to have the job I have, when I see how our values are put into practice.



Martin: With all research there is likely to be a personal call as to whether it is applicable to the context you are interested in. I haven’t delved deep into this area yet but there are a number of aspects I recognise or can relate to in our own distributed team. I think it’s also interesting to consider the quotes I pulled out in the context of the conference. For a number of our delegates the conference is that ‘occasional face-to-face gathering’ that helps them gain awareness of who and what is happening across sectors. One of the real strengths of our conference is that it’s an opportunity for new and existing members of our community to make an initial connection that can be continued via various means. Thinking about this I ended up with a very long list that spread across various mediums including face-to-face local meetings of ALT Members Groups and SIGs, social aspects that feature in our conference platform, various mailing lists and dedicated online social spaces, the #altc tag and more. I hope through all of this our community is able to develop familiarity and shared knowledge in learning technology. Their participation and engagement in turn will inspire our next steps in leading our virtual team.






This post continues our series on openly sharing our approach to leading a virtual team – a joint project with Maren Deepwell (cross-posted here) for which we write a monthly blog post, some of which are special podcast/conference editions.



September



This month we discuss some of the more serious upsides to home working.



Maren: We’ve previously talked a lot about all the strategies we’ve adopted to support home working and the challenges it brings with it. But at the end of a few weeks of working as long and hard as we can, the upside of working remotely – of not having to commute or be in an office – is at the forefront of my mind. It’s the first time in ten years that I’m not heading out to work at this time of year (just after the largest event we run) and I’m finding it much easier to get on with things from home. As we are a small team, even one or two staff being absent has a big impact and that easily happens in cold & flu season. Being able to take short breaks, eat, walk around and even have a nap has enabled me to work better than I was able to in our office in previous years. In addition it’s easier to catch up on life after a week away from home. Laundry is easier to hang up when your desk is only a few steps away. Whilst I always prefer staff to take time off when sick, working from home often seems much more possible and productive when working in an office wouldn’t be. For instance, being able to wear warm, comfortable clothes, have tea, look out of the window… every small advantage of home working helps with being exhausted and I am finding that an unexpected bonus. My cat is a great home working companion and he helps get me through the day. How about you? What home working upsides are you finding helpful just now?



Martin: Being already at home for deliveries or tradespeople is a big win. It also saves money on childcare as I’m at home to see my daughter in from school. Usually I’ll get her set up with her homework and she is fine for the last couple of hours I need to work. Where it gets tricky is school holidays and when I need to travel. This has recently got harder: up until last year my wife was doing her PhD, which gave her a lot of flexibility. Whilst her current full-time job has some flexibility, it’s not to the same degree. One of the nice things about working for ALT, even before moving to a distributed team, is its responsiveness to changes in personal circumstances and commitment to being a flexible employer. Something I was aware of when I started working from home, mainly thanks to my interest in wearables and fitness trackers, was the lack of activity I was getting each day. Whilst my office is in the attic and I make many trips to the kitchen for cups of tea, it still falls short of the recommended daily activity. My solution for topping this up is to replace what would have been my morning commute with 50 minutes of exercise. As this is a mixture of aerobic and weights exercise it turns out to be actually better than my old commute, which was a 30 minute walk to and from the station, so you could argue working in a distributed team has helped me towards a better lifestyle and overall wellbeing. Have you found you’ve replaced your own commute with anything?



Maren: In the past few years my personal circumstances have become a lot more complicated as I’ve become the carer for my parents. Working from home full time means I am now more easily able to juggle work and other commitments, although travelling etc. can also be a logistical challenge. Everyone has stuff they are trying to balance and being a distributed team makes that more possible in the long term. Regular exercise meanwhile is a more recent addition to my lifestyle as I never found an activity I really enjoyed until I started running to raise funds for cancer research a few summers ago. What began as an attempt to give back to those who saved my mother’s life turned into an unexpected love for running. The balance and headspace I get from heading outside and clocking up miles has become very important to me, but when I was still office based the only time I could fit it in was very early in the morning, and that became harder in the winter and less safe. Now, thanks to being home based, I can fit in a run more flexibly and keep active more regularly. To keep moving during the day, I also have a smart watch which reminds me to get up if I have sat still for too long. Other upsides for me are saving money by not having to commute, eating better food more cheaply, and being able to nap! I’ve become very good at napping and a half hour nap at lunchtime can make my afternoon more productive. There’s something here around not abusing the trust and freedom that comes with being a distributed team, about how personal and professional sides of life mix. We hear a lot about how work is becoming more and more pervasive, but over the past 10 years I have also developed a healthy respect for how much the personal impacts on professional practice and performance.
Working in a distributed team gives me a greater sense of empowerment to manage my time, but also responsibility to look out for my own wellbeing and work/life balance.



Martin: Trust is an interesting topic. When people find out I work remotely, often the first question is how I get up each morning. Some of this is actually enforced on me as I would need to get up anyway to get my daughter to school; enjoying my work is also a great motivator. My usual response to the question is that it’s often not an issue to start work – the problem is actually finishing at the end of my working day. So as well as not abusing the trust that comes with being in a distributed team, there is a degree of trust that you as an individual will look after your own wellbeing. The next question I often get asked is whether it is hard to work when the weather is sunny outside. Living in Scotland I benefit from it being nice outside less often, which removes that temptation. When it is nice I will try and take advantage of it when I can. Our Wi-Fi extends to parts of the garden and we have various garden tables and chairs I can work from. The time I spend working outside is however restricted to tasks I can achieve on a single screen; at my desk I’ve got a 4 screen setup:






Even if I can’t work outside, nicer weather is often the cue for me to have lunch outside or at least in our conservatory. Spending so much time at home I do occasionally find myself experiencing cabin fever. I only recently discovered that apparently even brief interactions with nature can go a long way to easing isolation-induced depression. Unknowingly perhaps my body already knew this because, as well as being a long-time runner, last year I bought my first road bike and often go on evening bike rides. As winter draws in these are curtailed and I find myself already trying to mentally prepare for the long grey winter days. What questions do people ask you when they find out you work in a distributed team? Have you experienced cabin fever yet?



Maren: The first question about working remotely I get asked is how I manage staff without supervising their work in person. How can I trust things are being done without seeing it, without being there etc. I rarely get asked how I myself cope with working culture, motivation or work life balance partly because I am a CEO and partly because of the assumption that I have it sorted (‘you are SO organised…’). My answer to the remote working question is that being part of a distributed team is a two way street. Staff need to want to do it, adjust or learn how to do things in a way that works for them AND the organisation. Everyone needs to be willing to make the most of the opportunities that being part of a virtual organisation offers, we can’t do that for them.



I struggle with loneliness and cabin fever and my mental well-being just as everyone else does, but ultimately I find working remotely liberating. I like the freedom and responsibility that comes with it and that is the biggest upside for me. The mentoring I’ve done over the past six years has shown me how important it is to me to be able to make things happen, to change a bit of the world (as cheesy as that sounds) and I feel more empowered to do that as part of a virtual team than I did when I was tied to a desk, managing an office space. Running virtual operations may take just as much effort, but there’s far more scope to improve and innovate than our previous working environment ever offered. That in turn really motivates me on dark, grey mornings or when I feel isolated. It also helps to have a bit of inspiration – which hangs above my desk:





That brings me to one last question for you: any tips for making the most of your physical work space at home?



Martin: In terms of physical space I’d certainly recommend trying to have a permanent corner in your house that you call your office. My office at home is also the spare room so I occasionally get turfed out when we have guests and, whilst I can work in other parts of the house, I find it hard to beat the comfort of my desk and office chair; plus everything is set up for me so in the morning I can just turn on my computer and I’m ready to go. Creating your own space is also an opportunity to think about the setup that’s going to work best for you. Given the rise of flexible and home based working there is a growing list of options for desks and clever storage systems that aren’t beige or grey. I know some remote and office based workers who are big fans of standing desks and treadmill desks. These aren’t options I’ve ever considered and I would recommend talking to someone who uses these setups first. One of the big advantages, I think, of working from home is that it’s an opportunity to create an environment that’s going to work for you. When I’ve been office based I’ve often encountered restrictions on how much personalisation you can do, and being a home worker is an opportunity to perhaps say goodbye to that clean desk policy, restrictions on what food you can eat, or the noise you make – an opportunity to crack open the garlicky pasta, switch on the radio and keep comfortable in your PJs.



Things we’ve been reading:








In March 2018 Google announced it was closing its Google URL Shortener, giving developers one year to migrate to its new Firebase Dynamic Links (FDL) service or other providers. The URL Shortener is also an Advanced Service in Google Apps Script and so far I’ve seen no news that an FDL service will be created in its place. Our organisation had a couple of projects that used the UrlShortener service, exclusively to shorten long links (we never used other features like analytics). As we also use the Bitly service with our custom domain it made sense to migrate our projects to use this instead.



Typically a call to the URL Shortener service would be:



var url = UrlShortener.Url.insert({
  longUrl: A_LONG_URL_HERE
});
var short_url = url.id;


So that we don’t have to change any core code I created the following shim, which is published as a library with the ID: 1ddSpTQoae2xdocyx0GcfNCKOjZu8je_OFWXUM_-cG-fGVJIQyxGRrAnQ



Source Code



var Util = (function (ns) {
  return {
    getCachedProperty: function (key) {
      var cache = CacheService.getScriptCache();
      var value = cache.get(key);
      if (!value) {
        value = PropertiesService.getScriptProperties().getProperty(key);
        cache.put(key, value, 86400);
      }
      return value;
    },
    setToken: function (token) {
      Util.setScriptProperty_('BITLY_TOKEN', token);
    },
    setGUID: function (guid) {
      Util.setScriptProperty_('BITLY_GUID', guid);
    },
    CALL_: function (path, options) {
      var fetchOptions = {
        method: "",
        muteHttpExceptions: true,
        contentType: "application/json",
        headers: { Authorization: "Bearer " + Util.getCachedProperty('BITLY_TOKEN') }
      };
      var url = 'https://api-ssl.bitly.com/v4' + path;
      for (var option in options) {
        fetchOptions[option] = options[option];
      }
      var response = UrlFetchApp.fetch(url, fetchOptions);
      if (response.getResponseCode() != 200) {
        throw new Error(response.getContentText());
      } else {
        return JSON.parse(response.getContentText());
      }
    },
    setScriptProperty_: function (key, value) {
      CacheService.getScriptCache().remove(key);
      PropertiesService.getScriptProperties().setProperty(key, value);
    }
  };
})(Util || {});

var Url = (function (ns) {
  return {
    insert: function (obj) {
      var path = '/shorten';
      var callOptions = {
        method: "POST",
        payload: JSON.stringify({
          "long_url": obj.longUrl,
          "group_guid": Util.getCachedProperty('BITLY_GUID')
        })
      };
      var r = Util.CALL_(path, callOptions);
      return { id: r.link };
    }
  };
})(Url || {});

var Groups = (function (ns) {
  return {
    list: function () {
      var path = '/groups';
      var callOptions = { method: "GET" };
      return Util.CALL_(path, callOptions);
    }
  };
})(Groups || {});


One slight variation when using the Bitly API v4 is the requirement to include a GROUP_GUID, as explained in the Bitly v4 migration documentation. Rather than modifying every script that uses the existing UrlShortener to fetch the GROUP_GUID, this, along with your Bitly token, is stored in the library as a Script Property. This means that once it is set up the only modification we need to make to our projects is switching off the existing UrlShortener Advanced Service and adding the new UrlShortener library. All of this is explained in the setup steps detailed below.



Using UrlShortener (Bitly version)



As this solution is designed to operate with a single Bitly account I’ve not implemented the OAuth 2.0 flow; instead you need your Bitly “Generic Access Token”, which you can find by following the steps below:



  1. Open a new Google Apps Script project and in the Script Editor click Resources > Libraries…, adding the following script ID in the ‘Add a library’ field: 1ddSpTQoae2xdocyx0GcfNCKOjZu8je_OFWXUM_-cG-fGVJIQyxGRrAnQ
  2. Log in to your Bitly account and from the ≡ menu navigate to Settings > Advanced Settings and click the OAuth link under For Developers. Now click on the ‘Generic Access Token’ menu, enter your Bitly password and copy the access token
  3. In the Script Editor copy the code below, adding your access token where indicated:


function oneTimeSetup() {
  UrlShortener.Util.setToken('YOUR_GENERIC_ACCESS_TOKEN_HERE');
  var grp = UrlShortener.Groups.list();
  UrlShortener.Util.setGUID(grp.groups[0].guid);
}


  4. Save your script project and then Run > oneTimeSetup
  5. After authenticating, the script should store both the access token and GROUP_GUID
  6. You can test by running the following function in the Script Editor and checking the logger for the result:


function testShorten() {
  var url = UrlShortener.Url.insert({
    longUrl: 'https://tu.appsscript.info'
  });
  var short_url = url.id;
  Logger.log(short_url);
}


Once you’ve completed the steps above you can delete the script project as it’s no longer required. To use the UrlShortener (Bitly version) in projects where you were using the original UrlShortener advanced service, open these projects, remove it as an advanced service and add the library as covered in step 1. If your project doesn’t already connect to an external service via UrlFetchApp you might want to do a test run in case any additional permissions are required.



Summary



The shim we have developed is designed to include only the API endpoints we require to replace the functionality in our existing code, and anyone is welcome to extend this to cover more of the Bitly API as required (here is the source code with the appropriate GPL licence).
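As an illustration of what such an extension might look like, here is a hypothetical module following the same pattern as Url and Groups above, targeting Bitly v4’s /expand endpoint. The field names should be checked against the v4 API reference, and Util.CALL_ is stubbed here so the shape can be tried outside Apps Script (the real library’s UrlFetchApp-backed Util would be used in practice):

```javascript
// Util.CALL_ stub so this sketch runs outside Apps Script; the library's
// own Util (backed by UrlFetchApp) replaces this in a real project.
var Util = Util || {
  CALL_: function (path, options) {
    return { long_url: 'https://example.com/a-long-url' };
  }
};

// Hypothetical extension module in the same IIFE style as Url and Groups.
// Bitly v4 exposes POST /expand taking a bitlink_id (the link without
// the scheme) – verify field names against the v4 API documentation.
var Expand = (function (ns) {
  return {
    expand: function (shortUrl) {
      var callOptions = {
        method: "POST",
        payload: JSON.stringify({
          bitlink_id: shortUrl.replace(/^https?:\/\//, '')
        })
      };
      var r = Util.CALL_('/expand', callOptions);
      return { longUrl: r.long_url };
    }
  };
})(Expand || {});
```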




In this blog post I’m going to show you how you can use Google Dialogflow in your Google Hangouts Chat bots created with Google Apps Script. Dialogflow, formerly api.ai, is a tool for building conversational interfaces. I’m assuming you have already started creating Hangouts Chat bots. As part of this you will have started to work out how to respond to text message events from your bot. The messages object contains lots of useful information about the sender, space, thread and so on, but interpreting the intent of the text message can be challenging.


For example, in the Hangouts Chat bot with Apps Script codelab, where you create a bot to work out if the user wants to record a holiday or sick day in their calendar, .indexOf() is used to detect a keyword:


// If the user said that they were 'sick', adjust the image in the
// header sent in response.
if (userMessage.indexOf('sick') > -1) {
  // Hospital material icon
  HEADER.header.imageUrl = 'https://goo.gl/mnZ37b';
  reason = REASON.SICK;
} else if (userMessage.indexOf('vacation') > -1) {
  // Spa material icon
  HEADER.header.imageUrl = 'https://goo.gl/EbgHuc';
}

In this example if the message text had ‘holiday’ instead of ‘vacation’ the user’s intent would be lost. You could of course start adding further conditions or regular expression matching, but with Dialogflow there is an opportunity to apply a little AI and come up with a more scalable solution.
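To make the limitation concrete, the keyword approach can be stretched a little with a hand-maintained synonym table – essentially what a Dialogflow entity does for you, minus the machine learning. The synonym lists below are illustrative, not from the codelab:

```javascript
// A synonym table stretches the indexOf() approach a little further,
// but every new phrasing still has to be listed by hand, and substring
// matches (e.g. 'ill' inside 'skill') remain a risk. The lists here
// are illustrative examples only.
var REASON_SYNONYMS = {
  SICK: ['sick', 'unwell', 'doctor', 'feeling ill'],
  VACATION: ['vacation', 'holiday', 'annual leave', 'time off']
};

function detectReason(userMessage) {
  var text = userMessage.toLowerCase();
  for (var reason in REASON_SYNONYMS) {
    var words = REASON_SYNONYMS[reason];
    for (var i = 0; i < words.length; i++) {
      if (text.indexOf(words[i]) > -1) {
        return reason;
      }
    }
  }
  return null; // intent not recognised
}
```

Every variation the table misses is a lost intent, which is exactly the gap Dialogflow is designed to close.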


To show you the basics of Dialogflow in Google Apps Script I’ll extend the Hangouts Chat bot codelab to extract and use the intent of the user’s message. As part of this I’ll show how you can extract information from a message, like when an event should occur and its duration. All of this will also be wrapped in a conversational interface, so if the user forgets to include some information they will be prompted to provide more details.



What you will learn



  • How to set up a basic Dialogflow Agent

  • How to set up a Google Apps Script project to interact with a Dialogflow Agent

  • How to handle responses from a Dialogflow Agent in Google Apps Script


What you’ll need



  • Access to the internet and a web browser.

  • A G Suite account.

  • Basic JavaScript skills—Google Apps Script only supports JavaScript.


Note: While you must use a G Suite account to interact with the Hangouts Chat bot, if you are interested in creating and using a Dialogflow Agent in your Apps Script project for other purposes you can also use a @gmail.com account.


Getting the sample code


The final code for this project, including a zip of the Dialogflow agent, can be cloned or downloaded from the following GitHub repository.


Click the following link to download all the code used in this post:


Download source code


Cloning the GitHub repository


To clone the GitHub repository for this codelab, run the following command:


git clone https://github.com/mhawksey/Hangouts-Chat-bot-with-Dialogflow.git


Setting up our Dialogflow agent


Dialogflow has a build your first agent guide, which will get you set up with an account (there is a free standard edition you can use). After you have created your agent (I called mine ‘Attendance-Bot’), I recommend the following steps:


If you want to skip these steps, download the .zip in the code sample and import it into your Dialogflow agent.



  1. Create and set up a new Entity for reason

  2. Create an intent called attendance and add some Training Phrases

  3. Make the reason a required parameter


Create and set up a new Entity for reason


From the Entity menu on the left-hand side create a new entity called reason and add the reference values vacation, sick, lunch, and outofoffice. For each reference value you can add synonyms, which allows our agent to interpret alternative ways of saying things like vacation:



Creating a new entity
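For reference, the agent export .zip stores each entity’s reference values and synonyms as plain JSON. A sketch of what the reason entries can look like (the file layout, field names and extra synonyms here are assumptions based on a typical Dialogflow export, not taken from the actual .zip):

```json
[
  {"value": "vacation", "synonyms": ["vacation", "holiday", "annual leave"]},
  {"value": "sick", "synonyms": ["sick", "ill", "unwell"]},
  {"value": "lunch", "synonyms": ["lunch", "lunch break"]},
  {"value": "outofoffice", "synonyms": ["out of office", "out-of-office"]}
]
```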


Create an intent called attendance and add some Training Phrases


From the Intents menu on the left-hand side create a new intent called attendance and start adding some Training Phrases:



Creating training phrases


As you add training phrases, Dialogflow will start to identify and highlight built-in entities like time, date and duration, as well as the custom reason entity we created in the previous step.


Make reason a required parameter


Once you have some phrases, a list of Action and parameters is automatically created. In this example we want to make sure reason is identified as a required parameter:



Action parameters created from the training phrases


Setting reason as a required parameter allows us to define some prompts if the reason for the absence is not included in the initial message to our bot. You can continue to refine the phrases and prompts you use.


The next steps will show you how to set up your Google Apps Script project to interact with your agent.


Setting up Google Apps Script for interaction with Dialogflow


To use your Dialogflow agent in your Apps Script project some setup is required. The steps below can be used for any Apps Script project that you want to interact with a Dialogflow agent. If you want to use Dialogflow as a continuation of the Hangouts Chat bot with Apps Script codelab, you should complete the steps below in your existing Apps Script project.


To use the Dialogflow API I’ll be using two client libraries:



  • cGoa – a library developed by Bruce Mcpherson to handle OAuth 2 calls

  • Dialogflow – a library I’ve created for Dialogflow API calls (generated from the Google Discovery Service with the Generator-SRC script created by Spencer Easton – read about it)


Note: These instructions are designed for V2 of the Dialogflow API which requires you to use a Service Account:



  1. Follow the Dialogflow documentation on Getting the Service Account key (Important: when adding Roles to your service account you need to add at least Dialogflow API Client to allow interaction with the intent endpoint).

  2. Upload the JSON key file you downloaded to Google Drive (this will be temporary while we configure the OAuth 2 setup).

  3. In your script project click on Resources > Libraries… and in the ‘Add a library’ field add the following libraries selecting the latest version:

    1. MZx5DzNPsYjVyZaR67xXJQai_d-phDA33 – cGoa

    2. 1G621Wm91ATQwuKtETmIr0H39UeqSXEBofL7m2AXwEkm3UypYmOuWKdCx – Dialogflow



  4. In your script project add the following code replacing NAME_OF_YOUR_JSON_KEY_FILE with the name of the file uploaded to Drive in step 2:


function oneOffSetting() {
  var file = DriveApp.getFilesByName('NAME_OF_YOUR_JSON_KEY_FILE.json').next();
  // used by all using this script
  var propertyStore = PropertiesService.getScriptProperties();
  // service account for our Dialogflow agent
  cGoa.GoaApp.setPackage(propertyStore,
    cGoa.GoaApp.createServiceAccount(DriveApp, {
      packageName: 'dialogflow_serviceaccount',
      fileId: file.getId(),
      scopes: cGoa.GoaApp.scopesGoogleExpand(['cloud-platform']),
      service: 'google_service'
    }));
}


  5. In the script editor run oneOffSetting(). Once the function has executed you can delete the function and the JSON file from Google Drive.


At this point your Google Apps Script project is ready to make calls to your Dialogflow agent. To use the Dialogflow client library we need to get our access token, which is set with .setTokenService(). The following code demonstrates how you can prepare a .projectsAgentSessionsDetectIntent() call by passing a TextInput as part of a queryInput request object:


Note: In the code below you need to replace YOUR_DIALOGFLOW_PROJECT_ID with your Dialogflow project ID, found by clicking the settings cog in the Dialogflow console:



Finding your Dialogflow project ID


/**
 * Detect message intent from Dialogflow Agent.
 * @param {String} message to find intent
 * @param {String} optLang optional language code
 * @return {object} JSON-formatted response
 */
function detectMessageIntent(message, optLang) {
  // setting up calls to Dialogflow with Goa
  var goa = cGoa.GoaApp.createGoa('dialogflow_serviceaccount',
    PropertiesService.getScriptProperties()).execute();
  if (!goa.hasToken()) {
    throw 'something went wrong with goa - no token for calls';
  }
  // set our token
  Dialogflow.setTokenService(function () { return goa.getToken(); });

  /* Preparing the Dialogflow.projects.agent.sessions.detectIntent call
   * https://cloud.google.com/dialogflow-enterprise/docs/reference/rest/v2/projects.agent.sessions/detectIntent
   *
   * Building a queryInput request object https://cloud.google.com/dialogflow-enterprise/docs/reference/rest/v2/projects.agent.sessions/detectIntent#QueryInput
   * with a TextInput https://cloud.google.com/dialogflow-enterprise/docs/reference/rest/v2/projects.agent.sessions/detectIntent#textinput
   */
  var requestResource = {
    "queryInput": {
      "text": {
        "text": message,
        "languageCode": optLang || "en"
      }
    },
    "queryParams": {
      "timeZone": Session.getScriptTimeZone() // using script timezone but you may want to handle as a user setting
    }
  };

  /* Dialogflow.projectsAgentSessionsDetectIntent
   * @param {string} session Required. The name of the session this query is sent to.
   * Format: projects/<Project ID>/agent/sessions/<Session ID>. It is up to the API
   * caller to choose an appropriate session ID. It can be a random number or some
   * type of user identifier (preferably hashed). In this example I'm using an
   * encoded temporary active user key.
   */
  // your Dialogflow project ID
  var PROJECT_ID = 'YOUR_DIALOGFLOW_PROJECT_ID';

  // using an URI encoded ActiveUserKey (non identifiable) https://developers.google.com/apps-script/reference/base/session#getTemporaryActiveUserKey()
  var SESSION_ID = encodeURIComponent(Session.getTemporaryActiveUserKey());

  var session = 'projects/' + PROJECT_ID + '/agent/sessions/' + SESSION_ID;
  var options = {};
  var intent = Dialogflow.projectsAgentSessionsDetectIntent(session, requestResource, options);
  return intent;
}
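Before wiring this into the bot it helps to know roughly what a successful detectIntent call returns. Here is a hypothetical response (the shape follows the QueryResult structure in the detectIntent reference linked in the code above; the values are invented for illustration):

```javascript
// Hypothetical detectIntent response for a message like 'Going for lunch for 1 hour'
var intent = {
  queryResult: {
    parameters: {
      reason: 'lunch',
      duration: {unit: 'h', amount: 1}
    },
    fulfillmentMessages: [
      {text: {text: ['What type of absence would you like to record?']}}
    ]
  }
};

// the handler code in the next section reads these fields
var reason = intent.queryResult.parameters.reason;
var prompt = intent.queryResult.fulfillmentMessages[0].text.text[0];
```

If the agent could not fill a required parameter, the parameter is empty and the fulfillment text carries the prompt you defined in the Dialogflow console.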

Using Dialogflow to respond to message events


The following code can be used in your Hangouts Chat bot with Apps Script codelab instead of the existing REASON object and onMessage function. The REASON object has been extended to include text variations and icons. In onMessage the user’s message is sent to the Dialogflow agent by calling detectMessageIntent(userMessage) and the detected entity parameter values are returned. If a reason is detected, it is used to create the widget for setting events in Gmail and Calendar; the detected entity parameters are stored in the widget’s textButton.onClick.action.parameters for use later. If no reason is detected by the Dialogflow agent, a set of buttons is displayed for the user to select an option. For each button any entity parameters detected by the Dialogflow agent are also included for use later:



Card interface with reason buttons


// new REASON object that has options for text title, inline text and icons
var REASON = {
  'vacation': {title: 'Annual leave', inlineText: 'annual leave', imageUrl: 'https://goo.gl/EbgHuc'}, // Spa material icon
  'sick': {title: 'Off sick', inlineText: 'sick leave', imageUrl: 'https://goo.gl/mnZ37b'}, // Hospital material icon
  'lunch': {title: 'Lunch', inlineText: 'a lunch break', imageUrl: 'https://goo.gl/zEhek7'}, // Dining material icon
  'outofoffice': {title: 'Out of office', inlineText: 'an out-of-office', imageUrl: 'https://goo.gl/aXtqPZ'} // Event busy material icon
};

/**
 * Responds to a MESSAGE event triggered in Hangouts Chat.
 * @param {object} event the event object from Hangouts Chat
 * @return {object} JSON-formatted response
 */
function onMessage(event) {
  console.info(event);
  var name = event.user.displayName;
  var userMessage = event.message.text;

  // detect intent of the message
  var intent = detectMessageIntent(userMessage);
  var intentParams = intent.queryResult.parameters;

  // if we have a reason show the Calendar and Gmail Out-of-Office buttons
  if (intentParams.reason) {
    var reason = intentParams.reason;
    var widgets = createAddSetWidget(name, reason, intentParams);
  } else {
    // no reason detected so prompt user to select using agent prompt
    var fulfillmentMessages = intent.queryResult.fulfillmentMessages[0].text.text[0];
    // build a set of buttons based on REASON
    var reasonButtonObject = Object.keys(REASON).map(function (idx) {
      intentParams.reason = idx;
      return {
        textButton: {
          text: 'Set ' + REASON[idx].title,
          onClick: {
            action: {
              actionMethodName: 'reasonButtons',
              parameters: [{key: 'entities', value: JSON.stringify(intentParams)}]
            }
          }
        }
      };
    });
    var widgets = [{
      textParagraph: {
        text: 'Hello, ' + name + '.<br>' + fulfillmentMessages
      }
    }, {
      buttons: reasonButtonObject
    }];
  }
  return createCardResponse(widgets);
}
/**
 * Create a card for setting events in Gmail or Calendar.
 * @param {string} name of the person adding the event
 * @param {string} reason of the event
 * @param {object} intentParams that contain any Dialogflow detected entities
 * @return {object} JSON-formatted response
 */
function createAddSetWidget(name, reason, intentParams) {
  // adjust the image and card subtitle based on reason
  HEADER.header.imageUrl = REASON[reason].imageUrl;
  HEADER.header.subtitle = 'Log your ' + REASON[reason].inlineText;

  // extract date objects from intent parameters returned by Dialogflow agent
  var dates = calcDateObject(intentParams);

  // build the Gmail/Calendar widget
  var widgets = [{
    textParagraph: {
      text: 'Hello, ' + name + '.<br>It looks like you want to add ' + REASON[reason].inlineText + ' ' + dateRangeToString(dates) + '?'
    }
  }, {
    buttons: [{
      textButton: {
        text: 'Set ' + REASON[reason].inlineText + ' in Gmail',
        onClick: {
          action: {
            actionMethodName: 'turnOnAutoResponder',
            parameters: [{key: 'entities', value: JSON.stringify(intentParams)}]
          }
        }
      }
    }, {
      textButton: {
        text: 'Add ' + REASON[reason].inlineText + ' in Calendar',
        onClick: {
          action: {
            actionMethodName: 'blockOutCalendar',
            parameters: [{key: 'entities', value: JSON.stringify(intentParams)}]
          }
        }
      }
    }]
  }];
  return widgets;
}

There are also some modifications to the CARD_CLICKED event to handle the reason buttons and to pass in any entity parameters from the original user message.


/**
 * Responds to a CARD_CLICKED event triggered in Hangouts Chat.
 * @param {object} event the event object from Hangouts Chat
 * @return {object} JSON-formatted response
 * @see https://developers.google.com/hangouts/chat/reference/message-formats/events
 */
function onCardClick(event) {
  console.info(event);
  var intentParams = JSON.parse(event.action.parameters[0].value);
  var message = "I'm sorry; I'm not sure which button you clicked.";
  if (event.action.actionMethodName == 'turnOnAutoResponder') {
    return {text: turnOnAutoResponder(intentParams)};
  } else if (event.action.actionMethodName == 'blockOutCalendar') {
    return {text: blockOutCalendar(intentParams)};
  } else if (event.action.actionMethodName == 'reasonButtons') {
    // now we know the reason we can show the Gmail/Calendar card which includes
    // existing parameters returned from the Dialogflow agent
    var widgets = createAddSetWidget(event.user.displayName, intentParams.reason, intentParams);
    return createCardResponse(widgets);
  }
  return {text: message};
}

The script also includes some changes to the turnOnAutoResponder() and blockOutCalendar() functions to enable the dates/times identified by the Dialogflow agent to be used when creating events or Gmail out-of-office settings:


/**
 * Turns on the user's vacation response for today in Gmail.
 * @param {object} intentParams detected by Dialogflow agent
 * @return {string} message
 */
function turnOnAutoResponder(intentParams) {
  var dates = calcDateObject(intentParams);
  var title = REASON[intentParams.reason].title;
  var inlineText = REASON[intentParams.reason].inlineText;
  Gmail.Users.Settings.updateVacation({
    enableAutoReply: true,
    responseSubject: title,
    responseBodyHtml: "I'm on " + inlineText + " between " + dateRangeToString(dates) + ".<br><br>Created by Attendance Bot!",
    restrictToContacts: true,
    restrictToDomain: true,
    startTime: dates.startDate.getTime(),
    endTime: dates.endDate.getTime()
  }, 'me');
  var message = "Added " + inlineText + " to Gmail for " + dateRangeToString(dates);
  return message;
}

/**
 * Places an all-day meeting on the user's Calendar.
 * @param {object} intentParams detected by Dialogflow agent
 * @return {string} message
 */
function blockOutCalendar(intentParams) {
  var dates = calcDateObject(intentParams);
  var title = REASON[intentParams.reason].title;
  var inlineText = REASON[intentParams.reason].inlineText;
  var options = {description: "I'm on " + inlineText + " between " + dateRangeToString(dates) + ".\n\nCreated by Attendance Bot!"};
  if (intentParams.reason == 'lunch' || intentParams.reason == 'outofoffice') {
    CalendarApp.createEvent(title, dates.startDate, dates.endDate, options);
  } else {
    CalendarApp.createAllDayEvent(title, dates.startDate, dates.endDate, options);
  }
  var message = "Added " + inlineText + " to Calendar for " + dateRangeToString(dates);
  return message;
}

A challenge when working with Dialogflow agents that can detect different date/time entities, including durations and periods, is calculating the actual start and end date. For example, a phrase like ‘Going for lunch at 1pm for 1 hour’ in my agent has the following parameter values:

PARAMETER     VALUE
reason        lunch
time          2018-10-06T13:00:00+01:00
date-period
date
duration      {"unit":"h","amount":1}

Using a slightly different phrase ‘Going for lunch from 1pm until 2pm’ can return:

PARAMETER     VALUE
reason        lunch
time
date
time-period   {"endTime":"2018-10-06T14:00:00+01:00","startTime":"2018-10-06T13:00:00+01:00"}
duration
date-period

I say ‘can’ because the magic of Dialogflow is that, as you provide more training phrases, it gets better at detecting entity parameters when they are phrased differently. To convert the returned entity parameters I used the following:


var ONE_DAY_MILLIS = 24 * 60 * 60 * 1000;
/**
 * Returns a reformatted object array.
 * @param {object} entities returned by Dialogflow agent
 * @return {object} of calculated dates
 */
function calcDateObject(entities) {
  var dates = {};
  // easy one - entities for date period
  if (entities['date-period']) {
    dates.startDate = new Date(entities['date-period'].startDate);
    dates.endDate = new Date(entities['date-period'].endDate);
    return dates;
  }
  // if no date period construct one
  if (entities['date']) {
    dates.startDate = new Date(entities['date']);
  } else {
    dates.startDate = new Date();
  }
  if (entities['time']) {
    var time = new Date(entities['time']);
  } else {
    var time = new Date();
  }
  dates.startDate.setHours(time.getHours(), time.getMinutes());

  if (entities['reason'] == 'sick') {
    // if sick default to day
    dates.endDate = new Date(dates.startDate.getTime() + ONE_DAY_MILLIS);
  } else {
    // default to 30 mins
    dates.endDate = new Date(dates.startDate.getTime() + 30 * 60000);
  }
  if (entities['duration']) {
    switch (entities['duration'].unit) {
      case 'mo':
        // copy startDate so the original is not mutated when adding months
        dates.endDate = new Date(new Date(dates.startDate.getTime()).setMonth(dates.startDate.getMonth() + entities['duration'].amount));
        break;
      case 'wk':
        dates.endDate = new Date(dates.startDate.getTime() + entities['duration'].amount * 7 * ONE_DAY_MILLIS);
        break;
      case 'day':
        dates.endDate = new Date(dates.startDate.getTime() + entities['duration'].amount * ONE_DAY_MILLIS);
        break;
      case 'h':
        dates.endDate = new Date(dates.startDate.getTime() + entities['duration'].amount * 60 * 60000);
        break;
      case 'm':
        dates.endDate = new Date(dates.startDate.getTime() + entities['duration'].amount * 60000);
        break;
      default:
        throw "Can't handle duration";
    }
  }
  return dates;
}
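To make the conversion concrete, here is the ‘h’ duration branch applied by hand to the values from the first table above (a standalone sketch, separate from the bot code):

```javascript
// entity values from 'Going for lunch at 1pm for 1 hour'
var start = new Date('2018-10-06T13:00:00+01:00'); // the time parameter
var duration = {unit: 'h', amount: 1};             // the duration parameter

// the 'h' case: endDate = startDate + amount * 60 * 60000 milliseconds
var end = new Date(start.getTime() + duration.amount * 60 * 60000);
// end is one hour after start, i.e. 2018-10-06T14:00:00+01:00, matching the
// time-period returned for the equivalent 'from 1pm until 2pm' phrasing
```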

Finally, a function to turn the calculated dates into a formatted text response, e.g. It looks like you want to add a lunch break Tue 9 Oct 12:00 PM until Tue 9 Oct 12:30 PM?


/**
 * Returns a date range string.
 * @param {object} dates to turn into human readable format
 * @return {string} of date range
 */
function dateRangeToString(dates) {
  var tz = Session.getScriptTimeZone();
  var format = "EEE d MMM h:mm a";
  return Utilities.formatDate(dates.startDate, tz, format) + " until " +
    Utilities.formatDate(dates.endDate, tz, format);
}

That should be all the extra code you need, and if your bot is deployed at HEAD it should be ready for testing as soon as you save your code. If you are having problems, a reminder that the complete project is available on GitHub and instructions on deployment are in the codelab. When you test your bot the response cards will look slightly different to the codelab:



Example interaction with the completed attendance bot


Summary


Hopefully this post has helped you get started with Dialogflow in Google Apps Script projects. Whilst I’ve focused on Google Hangouts Chat bot integration, there is nothing stopping you from using conversational interfaces in your other projects. In this post I’ve only covered the detectIntent endpoint, but there are other API methods you might want to use. For example, you could use Google Sheets/Forms as a way of collecting data to create new intents for your Dialogflow agent (see projects.agent.intents.create).


If you are developing Dialogflow agents further in Apps Script you might want to consider how your projects are structured. In this example I’ve kept two separate Cloud Console projects, one associated with the Dialogflow project and the other with the Apps Script project. For easier maintenance you might want to consider moving the Apps Script console project into the Dialogflow console project (see Switch to a different Google Cloud Platform project).


One final tip before developing Dialogflow agents: also check out the Hangouts Chat design guidelines prepared by Google, which provide tips on creating bots with the user experience in mind.






This post continues the series on openly sharing our approach to leading a virtual team – a joint project with Maren Deepwell (cross-posted here) for which we write a monthly blog post.



October



This month we discuss ways in which we could develop our approach to virtual team leadership including dealing with critical incidents when you have a distributed workforce. Rather than sharing what we found works, we open up some of the questions we have and consider how we might find solutions that support sustainable development for a small virtual team.



Maren: We received some really thoughtful comments in response to our last post, which touched both on the discipline it takes to work from home (your own and that of everyone who lives with you) and on the challenges of leading a virtual team. I’ve been thinking about how we might develop our approach to that kind of leadership and there are a couple of ideas I’d like to explore: first, I’m wondering if we should open up leading team meetings. As I meet with everyone one to one, my way of leading online meetings dominates in our team, and more diversity might be a good thing. Also, I’m curious about tools like Jamboard, a virtual notice board we recently looked at, to mix up how we work synchronously. Different tools may also open up new ways of interacting with each other as a group.





Last, inspired in part by this tweet I’m curious about how our team would respond to trying out new things, now that we have nearly a year of virtual working under our belt.



Martin: Having had the experience of leading team meetings in the past, the proposition of doing this on a regular basis isn’t one that personally appeals to me. In part I think it is because there are some subtle differences between leading virtual team meetings and leading them face-to-face. For example, as everyone is often staring at a monitor the temptation to check the various popup notifications increases, plus with virtual meetings I think there is a tendency, because you are not physically co-present, to feel that no one is watching you when you are not speaking. I find I often have to remind myself to pay attention and not get distracted, something I don’t think happens as much when meeting the team face-to-face. Consequently, I feel you need a strong individual to lead virtual meetings, someone who is skilful in keeping everyone’s focus and energy high, something you have in abundance. This is perhaps what Paul Hollins was alluding to in his comment on our last post when he mentioned that team directorship is a challenging area. Would you agree that different qualities are required to lead virtual team meetings compared to face-to-face?



Maren: There are definitely differences between face-to-face, blended and fully online meetings or webinars, and additional skills that need to be developed, including technical capabilities. I, for example, learnt a lot from seeing many different people lead meetings and facilitate webinars – but I also gained experience in different contexts that helped me build skills and confidence. It’s important to see different people’s approaches in order to find one that works, but there are some commonalities: listening, giving everyone the chance to participate/speak, keeping to time, preparing an agenda, and being clear about the purpose of the meeting and its outcome. You describe traps that we can all fall into: a temptation to be passive or distracted, to rely on others’ momentum. Having seen plenty of people doodle, eat, doze or lurk their way through meetings whilst sitting around a table, I think that particular aspect of communicating or working together is always challenging.



But coming back to Paul’s comment, I’ve reflected on the challenge of managing a crisis recently, something that came up in a series of posts I’ve written with my mentor, Margaret. Margaret commented on how using different ways of working together affected the quality of our interaction. For example, we would plan strategy face to face, work on procedures in shared docs or speak on the phone in an emergency. One of the biggest challenges in managing a distributed team lies in building confidence in managing a crisis and continuing to communicate in an emergency, whether that’s staff illness, systems failures or external issues. It takes time and, unfortunately, experience to build trust in ways of managing a crisis when meeting in person isn’t an option. Initially, I found it really difficult that my Line Manager or mentor were a remote presence only. Sometimes I still do. On the other hand, that perspective helped me identify and develop the skills I need to provide support or manage an emergency. You and I have a lot of experience in managing different types of emergencies, but scaling that up to work for everyone in the organisation is a continuous learning process.



Martin: It’s interesting to reflect on critical incidents and whether being a virtual team hindered our response. An incident that immediately comes to mind was last year’s ALT Online Winter Conference, when the European servers for the webinar platform we use went down. My experience was that the combination of chat and video Hangouts worked well, and I don’t think it hindered our response in any way. Fortunately we also had great support from our webinar provider, which resulted in minimal impact. In some ways I think where it gets harder are the moments where you are tackling a slow burn rather than an immediate crisis like a server failure. In these situations the danger is that they are not perceived by everyone as critical but can quickly escalate to become critical if not addressed. This is where continued communication becomes essential and, like you, I think it can be easily forgotten, particularly in a distributed environment. This is perhaps where online tools can help. At the beginning of this post you mentioned we’d been looking at Google’s collaborative meeting tool Jamboard. This basically gives you a virtual whiteboard you can collaboratively contribute to. As someone who likes using post-it notes, a virtual place for sticking these is immediately attractive. I can see Jamboard and tools with similar functionality as a way to avoid slow-burn incidents, providing a way for everyone in the team to get an overview of useful information. Creating workflows or using tools everyone is happy with is always a challenge, not just for distributed teams, and creating a culture of continuous learning is very important.



Maren: I agree with that. On the one hand it’s important to manage change by providing some continuity, like some of the strategic and operational planning tools we use and in a year full of change keeping some things the same has been a necessity. On the other hand, we’ve learnt a huge amount in the transition to operating as a virtual organisation and it makes sense that we learn from that and try out new ways of doing things. We’re already learning a lot from an interim virtual audit, and improving our new financial and payroll procedures and checklists. You point to some advantages collaboration tools could give us when it comes to managing a crisis, and there could be other upsides such as simplifying communication, creating an easy to access overview of progress and giving greater support for our team outside of meetings. I feel the additional support structure a new tool or maybe a new way of using an existing tool could provide would help give us confidence in the long run and support more learning and agile working. I’d really like to find a way to incorporate a stronger sense of progression into our weekly team meeting notes and I’d like to see whether our operations plan could become more practical day to day. I’d like a better overview of areas I don’t have active involvement in. What’s on your wish list of things to try?



Martin: I came across an interesting series of posts by Zapier in which they’ve documented the tools they recommend for remote teams as they have grown from 6 to 20 to over 110 employees. The first two posts aren’t date stamped but I’m guessing they hit 6 employees in 2013 and 20 in 2015, and the over-110 post was published in 2017. Some tools mentioned in 2013 that caught my eye were iDoneThis and Sqwiggle. With iDoneThis “everybody on the team checks in daily; either in their browser or via email”, which is turned into a daily digest or analysed in a report. I can see the benefit of such a system but fear, depending on how it was implemented, it might be perceived as too draconian. Another service that got my attention for different reasons was Sqwiggle: “Sqwiggle is a persistent video chat room, but instead of having a live video feed on all the time like you might do with Skype or Google Hangouts, Sqwiggle takes a picture of you every 8 seconds”. We’ve previously talked about the importance of trust within remote teams, and whilst I can see why people might like Sqwiggle, to me it appears like surveillance technology and a shortcut to eroding trust. Sqwiggle closed in 2016, and neither Sqwiggle nor iDoneThis appeared in Zapier’s posts in 2015 and 2017. One tool that appears in all of Zapier’s posts is the project management service Trello. Trello is a tool I often hear about in the developer community and there are lots of posts and resources that promote it as a tool to support remote teams, including Trello’s own Trello for Remote Teams. Having had a quick look at Trello, an immediate thought is whether we can replicate it with any of the existing tools we use, like Google Keep. Ultimately I think it comes back to one of our core principles, the appropriate use of technology. Unpacking what it is we want to achieve will go a long way in helping us decide how we continue to develop remote teamwork at ALT.



Other things we’ve been reading:









This post continues the series on openly sharing our approach to leading a virtual team – a joint project with Maren Deepwell (cross-posted here) for which we write a monthly blog post.


November


This month we discuss checklists, how each of our staff invented their own scale to rate their week, and treating others with kindness.


Maren: I’ve been thinking about how to make time for both urgent and important things, and at the same time to reserve enough space for impromptu collaboration. This article on using G Suite to improve team performance via intermittent interaction, for example, argues that agile communication and collaboration is more effective than regularly scheduled interactions – but it’s much harder to do well in my experience. Following on from last month’s post, in which we discussed Trello and Jamboard, one of the things we have since implemented are some new checklists for areas like payroll, GDPR and tech maintenance that cover regular, important tasks for the team but aren’t tied to urgent deadlines. I’m a big fan of checklists (and their history in aviation and healthcare in particular) and find them very effective in a team like ours, particularly for things that we don’t work on every day. We complete the checklists individually, but we have a prompt at team meetings and then review the results together, which gives us space to raise questions. For me, another upside is that I spend less time on something that’s routine, if important, freeing me up for other things. As a recent development in our approach to leading the team, what are your thoughts on this?


Martin: An aspect of checklists, particularly if they are shared, that I hadn’t really considered until recently was how they can be used to ‘nudge’ team members. Nudge theory was something I first heard about many years ago talking to a contractor who worked with the UK Government, in part supporting the British Cabinet Office Behavioural Insights Team, also known as the “Nudge Unit”. It was only recently, at the Scottish ELESIG, that I first heard it being referenced in a learning and teaching context. This got me thinking about nudges in distributed teams. I’ve no experience in behavioural science and I should also say I can see the dangers of going down a ‘nudge management’ route, but I can also see the value in exploring some aspects of ‘nudges’. An interesting paper I came across was Nudge management: applying behavioural science to increase knowledge worker productivity (Ebert and Freibichler, 2017). This paper highlights a number of nudge tactics and it’s interesting to see that things like quarterly reporting of goals/milestones are something we already do. I can also see parallels between some of the examples reported and the G Suite article and models of collaboration you mentioned earlier. It is unlikely we will get a Google-style micro kitchen, but I always wonder if there is more we can do to create opportunities for the informal exploration of ideas. Back to checklists: I see one advantage of using these is that it becomes easier for us to see who has and hasn’t completed activities … nudge, nudge, wink, wink, say no more.


Maren: I don’t like the term nudge management much, but it’s a very useful idea, and so is the article once I got past the terminology. One benefit of using approaches like this in a small team, rather than in a huge corporation like Google, is being able to take an agile, informal approach to iterating on checklists: for example, when a colleague added something new to their own checklist this week, we identified this as a gap and added it to the template for everyone. Two minutes’ work resulted in an immediate improvement. Similarly, being able to see an overview of everyone completing a monthly GDPR checklist makes the process more transparent and shows that everyone is participating. It improves communication in a very time-effective way. Being a small team, we can sometimes take a more playful approach, as this week’s team meeting demonstrated. We often incorporate a check-in into team meetings, where we quickly and informally share how we are doing and how busy we are. That started as rating your week out of 10 (10 = the busiest) and has gradually turned into everyone rating their week on their own scale, be that a colour, a phrase or a similar metaphor. With a small team that works, because it still fulfils the purpose of the exercise: taking a step back and checking in with yourself, sharing that with everyone without being competitive, and giving us a better sense of how everyone is doing. How individuals choose to rate their week, and what scale they apply, also tells you a lot: it reflects their personality or mood, how they integrate with the team, how much of a sense they have of how they’re doing… it helps to fill the gaps formal reporting or catch-ups can’t. We have established such a strong foundation in our team meetings process that we now have more freedom to ‘personalise’ parts of it, like the check-ins. The way I see it, the better the structures we have as a team, and the more we trust them, the more freedom we have. I’m really interested in the relationship between those structures and the agility they can bring. Does that make sense to you?


Martin: Being able to express ourselves is a good thing; I’d never considered whether it is more or less important for distributed teams. I think it certainly helps when work pressure is on, and that degree of informality perhaps provides a foundation for creating a culture of the ‘informal exploration of ideas’. I can also see the benefit of moving to a more abstract and personal scale. One issue with a 1 to 10 scale is that it can become a little divisive, particularly in a virtual team context. For example, Bob says he’s a 9 and Margaret thinks ‘how is Bob a 9!?’, in part because in a distributed setting it’s hard to see how hard Bob is working. Equally, John might feel like a 6 but at the same time feel guilty that Bob is a 9. With everyone using their own personal scale it becomes less quantifiable and more of a personal reflection, which is less open to judgement. Using scales hopefully also creates opportunities for the ‘humble brag’. I know this month you’ve shared a post on the ‘Virtual no Distant’ blog about sharing success. The post highlights, on perhaps questionable research, the increased importance of sharing when things go well. As someone who usually has to be reminded to share my achievements, I can see the benefits of this approach. In particular, where I see this fitting in is that you might be ‘code red’ because the thing you were working on has been a huge success, the downside being that it’s created more work. How do you feel about the ‘humble brag’?


Maren: Hmmmm. I’ve written posts like How to share credit and praise yourself… reflecting on the value of (deserved) recognition and Don’t think you are brilliant? Think again… – so I have given this quite a lot of thought. Alongside terms like ‘imposter’ or ‘lurking’, I have ambivalent feelings about ‘humble bragging’ (i.e. making an ostensibly modest or self-deprecating statement with the actual intention of drawing attention to something of which one is proud). On the one hand, modesty and humility are good qualities to have. On the other hand, it’s essential to learn how to assess one’s professional achievements and articulate them. Too often, particularly in our cultural context, professionals are unable to do that effectively (and incidentally this is something I come up against frequently as a CMALT assessor). I find that sometimes ‘humble bragging’ is less a reflection of personal modesty and more to do with not really reflecting on progress, or not understanding one’s role within a team, one’s contribution and why it is important. Not being able to say ‘I did this…’ or ‘It’s my responsibility to…’ can make it much harder for others to understand what that person’s role is and to respond accordingly. That’s why, a few years ago, we introduced 360 degree feedback for everyone as part of our annual appraisal process. It’s a regular opportunity to practise giving and receiving feedback, both good and bad. That said, any form of sharing the highs (and lows) of our work is important and welcome, ‘humble brag’ included! I’m generally very communicative at work and confident enough to share both success and failure. I try to be honest about things that go wrong because I want to show that it does happen and how to deal with it. But I appreciate that this is much harder for some, and at times a ‘humble brag’ may be the best strategy, maybe the only way to communicate. Our job, leading a team, is to listen and acknowledge that achievement, no matter how loudly or quietly it may be voiced. For me, it’s also about treating others with kindness (which is also why I am mentioning our postal Secret Santa, sending a little kindness to each other each December).


Martin: As tempting as it is to put ‘Secret Santa’ under the lens of ‘nudge management’, perhaps we should end here, show you, dear reader, some of our own kindness, and wish you all a wonderful holiday season. We look forward to sharing more of ALT’s journey in becoming a virtual team in 2019.


Other things we’ve been reading:





