Further explorations of the TweetsToRss tool

I am using the TweetsToRss tool created by Dave Winer to get a set of tweets for a Twitter user and convert it to an RSS feed (I wrote a previous blog post comparing this tool and Granary). When I started using the tool, I saw that it only listed tweets created by the user (no replies). I decided to look at how to add replies to the RSS feed.

Twitter has an API reference page for the data contained in a user timeline. I decided to review TweetsToRss to see if some of this data was present. In tweetstorss.js, there is an array called “params”, which sets screen_name to the variable username, and trim_user to “false”. I thought I would have to add some additional parameters to this array based on the API reference page. However, upon further review of the source code, I found some logic that skips replies (lines 437-441):

[cc lang=javascript]
if (flSkipReplies) {
    if (thisTweet.in_reply_to_status_id != null) { // it's a reply
        flInclude = false;
    }
}
[/cc]

The value of flSkipReplies is set to true on line 36. I changed this line to set flSkipReplies to false, and was able to see replies in the RSS feed – yay!
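As an aside, the Twitter v1.1 user_timeline endpoint also supports an exclude_replies parameter, so another approach would be to let Twitter do the filtering instead of the client-side flSkipReplies check. Here is a minimal sketch – the buildTimelineParams helper is my own illustration, not something that exists in tweetstorss.js:

[cc lang=javascript]
// Hypothetical helper: build the params object for a v1.1
// statuses/user_timeline request. screen_name, trim_user,
// exclude_replies, and count are documented v1.1 parameters.
function buildTimelineParams(username, includeReplies) {
    return {
        screen_name: username,
        trim_user: "false",
        // "true" tells Twitter to drop replies before returning results
        exclude_replies: includeReplies ? "false" : "true",
        count: 50
    };
}

console.log(buildTimelineParams("someuser", true));
[/cc]

One caveat with exclude_replies: the API applies count before filtering out replies, so you can get back fewer tweets than you asked for – the same kind of count mismatch as filtering client-side.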

One last thing – I tested with flSkipReplies set to both true and false, and saw that the version of the RSS feed with replies had 20 items, while the version with no replies had only 10 items. There may still be a thing or two to play around with here….

In this article, we’re going to talk about finishing what you start. We’ll talk about:

  • Mindsets that block us
  • Slimming down your project list
  • How to move more to the done column (hint: it’s not “work harder”)

Let’s dig in!

I just linked to a Twitter thread by Sam Julien on this topic – glad to find the ideas described in more detail….

This is a good thread if you want to finish things…

How I am getting things done on Christmas projects

TL;DR version: make a short list, break tasks down, prioritize, be gentle with yourself

Full version:

My company has a holiday shutdown at the end of the year, and I usually take enough vacation so I get a two week break – nice! In the past, I have tried to take advantage of this time to do lots of things, but ended up not completing a lot. This year, I decided to change how I approached this opportunity:

Step 1: Make a short list of projects to accomplish

After reviewing the things I might want to do, I settled on two large tasks that I had not made any progress on in the last month. One was a writing project to document the tools and processes I used to create and publish the Portland Protest News website. I had an outline that was several months old, but had not made time to get this done. The second task was starting to read How to Engineer Software by Steve Tockey. I had watched some videos of him talking about the use of semantic modeling, and was intrigued enough to buy his 1100-page book. However, it has been sitting on my desk for almost a month, and I had only opened it once.

Step 2: Break tasks down

For the writing task, I set a goal of 500 words per day, following Jeff Goins’ Three Bucket System for starting a writing habit. For the reading task, I set a goal of one chapter a day. With these constraints, I hoped I could fit both tasks in around holiday activities and still feel a sense of accomplishment.

Step 3: Prioritize

I tried to do these tasks as early as possible each day. That did not always work, but knowing that these were my most important items helped me maintain focus.

Step 4: Be gentle with yourself

If there was a day where I did not get both tasks done (or maybe neither!), I decided that was going to be ok. After all, this was a vacation, not a job!

Results:

I have completed my first week of vacation, and managed to write at least 500 words each day. I completed 5 chapters in the book, so there were two days I did not complete my reading task. But that’s ok! I feel good about what I have gotten done, and managed to fit in a few more things (like this blog post).

I hope this is helpful – let me know what you think about my method!

 

Curation as an act of journalism

In September 2020, I started a site called Portland Protest News. Its purpose was to be a daily collection of what I considered to be the best reporting and analysis over the past 24 hours on the subject of the protests in Portland, Oregon. The site focused primarily on local news sources, as I considered those the people closest to the action. However, I included links to posts/articles from other sources if they were bringing out aspects of the protests not covered in the local media.

In creating and running this site, I was not generating any original reporting on events concerning the Portland protests. However, I feel that my activity did fit within Wikipedia’s definition of journalism:

Journalism is the production and distribution of reports on current events based on facts and supported with proof or evidence.

I was filling the “distribution” role in the above definition. However, I was also performing these roles (per Mindy McAdams):

  1. Selection of the best representatives: If a museum curator has access to 10,000 small clay tokens from ancient Iraq and Syria, how many — and which ones — should appear inside the glass case? If a journalist is going to provide links to reliable sources about planning for retirement (or breast cancer, or choosing a college), which are the best, clearest, and most up-to-date?
  2. Culling: How many links is enough, and not too much? If the museum curator puts 100 of those tokens in one case, my eyes will glaze over.
  3. Provide context: Will you include a bit of explanatory text to show me how each source differs from the others? Why am I looking at this one? Where is it from? How old is it? Why is this one significant?

The Online Journalism Blog also touches on curation:

Curation is a relatively new term in journalism, but the practice is as old as journalism itself. Every act of journalism is an act of curation: think of how a news report or feature selects and combines elements from a range of sources (first hand sources, background facts, first or second hand colour). Not only that: every act of publishing is, too: selecting and combining different types of content to ensure a news or content ‘mix’.

Where am I going with this? Here is my point: when someone collects links on a topic, that is an act of journalism. It is a type of journalism that all of us can pursue and contribute to.

The process of democracy

Within the past few weeks, the citizens of the United States of America have had the opportunity to follow the election process across the 50 states and the interaction of the states with the federal government. Reports from media organizations and social media have been the main way people have been able to monitor these processes. However, I have had two experiences since Election Day that demonstrated that attending an event, or observing it live, can provide much more information than the summary presented by a media organization.

The first instance was a press conference by President-Elect Joe Biden in Wilmington, Delaware. This was carried live by ABC News (where I saw it), and the experience was a breath of fresh air – indeed, a whole tank of fresh air! Kamala Harris gave an address, followed by Joe Biden. These addresses were clear and understandable. Afterwards, Joe Biden took questions from reporters. The phrase that best described it was “presidential”. Biden was polite and thoughtful in his replies to questions; he was honest in saying “I don’t know” when he did not know the answer; he was straightforward in explaining how he could not answer some questions as he had not been inaugurated as President, and he responded in a respectful way even when reporters were clearly trying to bait him into responding in a rash way. The experience was one I had not seen in over four years.

The second instance was a livestream presentation of the meeting of the Michigan Board of State Canvassers held on Monday, November 23rd. I watched the initial part of this meeting via a link from the Detroit Free Press, although I found later there were other links to the Youtube live video. Through watching this livestream, I had a front-row seat to listen to the statements from the board members, testimony from the state elections director, multiple clerks and election officials, and a number of private individuals. I was impressed and proud to hear how many individuals contributed in carrying out the election in Michigan. I was disgusted to see how board member Norman Shinkle treated a number of the people giving testimony. I was impressed with how board member Aaron Van Langevelde asked questions in an attempt to make an informed decision about what actions the board should take. I was pleased with the other two board members, Jeannette Bradshaw and Julie Matuzak, and their responses to the situation. I had to invest several hours to participate in this way, but I received a richness of information that, again, was not available from any other news source.

After these experiences, I was filled with pride in the people who serve our country in the electoral process and the work of the courts in resolving disputes. I was also proud to see first-hand the President and Vice-President that will begin leading our nation in January 2021. Although my participation in this election was solely to vote, I am going to look for ways to get more involved in the political process of my city, state, and country. I encourage all citizens of the United States of America to do the same, to help in healing the divide that affects our nation.

 

A dog’s life

Here is our dog Joey taking his post-barking-at-the-latest-delivery-person nap:

In honor of President-elect Joseph Biden’s victory, I have started calling Joey “Mr. President” at various times during the day.

One of my kids sent me a video of a dog that looks just like Joey (click this Reddit link). Unlike that dog, Joey would not be able to fit inside a dryer bin.

nodeStorage: Feedback on new migration changes (affects 1999.io)

I wrote a post recently about a 1999.io server that I set up for a friend. That post documented a problem that happened when I initially set up the server. A few weeks ago, he wrote me to say that he was getting a 502 Bad Gateway error when he tried to access his site. After some review, it appeared that the error occurred because the server process had stopped for some reason. I went to restart the server, but immediately got an error:

Cannot find module ./main.js

After some more review, I saw that the nodeStorage repo (https://github.com/scripting/nodeStorage) had been updated in October. This is the backend of the 1999.io blogging tool. I remembered that in the past, some of these apps would try to update themselves periodically, and from looking at the installation directory, this appeared to be the case. The update created an NPM package for nodeStorage (it appears this was to help make future updates easier). I agree that the manner of updating 1999.io in the past has been somewhat awkward, so I hope this will improve the environment for future updates. However, I wanted to get my friend’s server working again, so I started trying to figure out how to fix things as quickly as I could.

The description for the new update process (https://github.com/scripting/nodeStorage/blob/master/package.md) had the following steps:

  1. Download the package.
  2. The code you need is in the example folder. Copy the two files into your app’s folder.
  3. Edit app.js to the name of your app, and update package.json accordingly.
  4. You can delete the other files.
  5. At the command line, enter npm install.
  6. You still have to have a config.json file as before.

I began to work through these steps; here are my notes:

  1. Download the package.

Since I did not think my install was ready for the ‘npm update’ type solution (future state), I instead downloaded a Zip file of the current nodeStorage repo (https://github.com/scripting/nodeStorage/archive/master.zip) and unzipped the file.

  2. The code you need is in the example folder. Copy the two files into your app’s folder.

I reviewed the unzipped files and saw a folder called “example” containing two files (app.js and package.json). I copied these files into my existing nodeStorage directory.

  3. Edit app.js to the name of your app, and update package.json accordingly.

I wanted to keep running storage.js as before, so I changed the app name in app.js from “nodestorage” to “storage” (which corresponds to storage.js). I decided to keep my current package.json for now.

  4. You can delete the other files.

I decided to delete the node_modules directory within the nodeStorage directory (due to step 5).

  5. At the command line, enter npm install.

I executed this command from the nodeStorage directory.

  6. You still have to have a config.json file as before.

I had a config.json file, so I left it as is.

I then started the server using the command “node storage.js”, and it appeared to come up correctly. I then stopped it and started it again using the command “forever start -a storage.js”.

I then repeated this operation with a second 1999.io install, except that I left app.js and package.json from the example folder as-is. I then used the command “node app.js” and saw the server come up correctly. Finally, I stopped it and started again using the command “forever start -a app.js”, with everything working normally.
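For reference, here is a condensed shell sketch of the whole sequence as I ran it for the second install (the one using the example app.js as-is). The install directory name (~/nodeStorage) is an assumption – adjust the paths for your own setup:

[cc lang=bash]
# Assumes the nodeStorage install lives in ~/nodeStorage – adjust as needed
cd ~/nodeStorage

# Step 1: download the current repo as a Zip file and unpack it
curl -L -o master.zip https://github.com/scripting/nodeStorage/archive/master.zip
unzip master.zip

# Step 2: copy the two files from the example folder into the app folder
cp nodeStorage-master/example/app.js nodeStorage-master/example/package.json .

# Step 3: edit app.js and package.json by hand here if you want an app
# name other than the default

# Step 4: delete the old node_modules directory so npm can rebuild it
rm -rf node_modules

# Step 5: reinstall dependencies
npm install

# Step 6: config.json stays as-is; start the server under forever
forever start -a app.js
[/cc]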

Conclusions/suggestions:

  1. Recommend changing step 1 to say “getting the nodeStorage repo Zip file” rather than “Download the package”, as the latter gives the impression that you are supposed to issue some NPM command right at the start.
  2. Be more specific about which files could or should be deleted in step 4. I deleted the node_modules directory, but perhaps other files could be deleted as well.

Another in a series of articles I have seen about COBOL; it includes some nice history and examples of the current use of COBOL in the US financial industry.