This article is part of the On Tech newsletter. You can sign up here to receive it on weekdays.

There’s another congressional hearing today on an internet law that predates Google: Section 230 of the Communications Decency Act. Please don’t stop reading.

Chances are the law won’t change. But Section 230 is still worth talking about because it has become a stand-in for big questions: Is more speech better, and who gets to decide? Should we do something about giant internet companies? And who is responsible when bad things that happen online cause people to be hurt or even killed?

Let me try to explain what the law is, what the fight is really about, and the proposals to fix it.

What’s Section 230 Again? The 26-word law allows websites to set rules about what people can or cannot post without being held legally responsible for the content (for the most part).

If I accuse you of murder on Facebook, you might be able to sue me, but you can’t sue Facebook. If you buy a defective toy from a retailer on Amazon, you may be able to take the seller to court, but not Amazon. (There is some legal debate about this, but you get the gist.)

The law created the conditions for Facebook, Yelp, and Airbnb to give people a voice without being sued out of existence. But now Republicans and Democrats are asking whether the law gives tech companies either too much power or too little responsibility for what happens on their watch.

In general, Republicans fear that Section 230 gives internet companies too much leeway to suppress what people say online. Democrats worry that it gives internet companies a pass for failing to effectively stop illicit drug sales or to prevent extremists from organizing violence.

What is the fight really about? Everything. Our fears are now projected onto these 26 words.

Section 230 is a proxy battle for our unease that Facebook and Twitter have the power to silence the president of the United States or a student who has nowhere else to turn. The fight over the law reflects our fears that people can lie online with seemingly no consequences. And it’s about the desire to hold someone accountable when what happens online causes irreparable harm.

It makes sense to ask whether Section 230 removes the incentive for online businesses to take steps to prevent people from smearing those they don’t like, or to block the channels that make it easier to sell illicit drugs. Likewise, it is reasonable to ask whether the real problem is that people want something, anything – a broken law or an unscrupulous internet company – to hold responsible for the bad things people do to one another.

One topic at Thursday’s congressional hearing is the many legislative proposals to amend Section 230, mostly around the edges. My colleague David McCabe helped me sort the suggestions into two (slightly overlapping) buckets.

Fix-it plan 1: Raise the bar. Some lawmakers want online businesses to meet certain conditions before they receive Section 230 legal protection.

For example, one congressional proposal would require internet companies to report to law enforcement when they believe people might be planning violent crimes or drug offenses. If companies fail to do so, they could lose Section 230 legal protection, opening the floodgates to lawsuits.

Facebook endorsed a similar idea this week, suggesting that it and other large online companies should put in place systems for identifying and removing potentially illegal material.

Another bill would require Facebook, Google, and others to demonstrate that they did not display political bias in removing a post. Some Republicans say Section 230 requires websites to be politically neutral. That is not true.

Fix-it plan 2: Create more exceptions. One proposal would prevent internet companies from using Section 230 as a defense in legal cases involving activity such as civil rights violations, harassment, and wrongful death. Another would let people sue internet companies if images of child sexual abuse were posted on their sites.

This category also includes legal questions about whether Section 230 applies when an internet company’s own computer systems help spread material. When Facebook’s algorithms helped spread Hamas propaganda, as David detailed in an article, some legal experts and lawmakers said that Section 230 legal protection should not have applied and that the company should have been held partly responsible for acts of terrorism.

(Slate detailed all proposed bills to amend Section 230.)

There is no denying that by connecting the world, the internet as we know it has enabled people to do a lot of good – and to do a lot of harm. The fight over this law contains multitudes. “It all comes from frustration,” David told me.

  • Amazon’s tricky political balancing act: David’s recent article examines Amazon’s attempt to stay on the good side of Washington’s Democratic leaders while fighting a union drive supported by many Democratic politicians. (Also, one of Amazon’s executives got into a spat with Senator Bernie Sanders on Twitter.)

  • Math lessons for your child (and you): The Wall Street Journal explains some of the educational apps and services that can help families with math homework, lessons, and tutoring. For example, you can take a photo of a math equation and Photomath will spit out the answer with instructions on how to solve it.

  • It took the Pentagon three weeks to create a bad meme: Vice News has the details on Department of Defense employees making an online visual joke about Russians, malicious software, and maybe Halloween candy? The meme wasn’t funny, it took 22 days to make, and it was retweeted only about 190 times.

Dolphins! In New York’s East River! That’s fun! (Apparently it’s not that unusual. You can find more details on dolphin sightings around Manhattan here.)

We want to hear from you. Tell us what you think of this newsletter and what else you would like us to explore. You can reach us at ontech@nytimes.com.

If you don’t already get this newsletter in your inbox, please sign up here.