mike_wooskey

joined 1 year ago
[–] [email protected] 1 points 5 months ago

Thanks for explaining that, @[email protected]

[–] [email protected] 1 points 5 months ago (2 children)

Thanks for the help, @[email protected].

I do still have my old server (I'm posting this from it). The new Lemmy server is using a different domain.

[–] [email protected] 2 points 5 months ago (2 children)

Thanks for the assistance, @[email protected].

My new server uses a new domain. I do still have the old data (in fact, the old server is still up - that's where I'm posting this from).

I installed both Lemmy servers via Docker. It would be nice if I could rsync my account data (including post/comment history) from the old server to the new one, but I'm now wondering whether changing domains would make the old account not work at all on the new server.

[–] [email protected] 1 points 5 months ago (1 children)

I see the import/export settings in my new server (0.19.3) but not in my old server (0.18.3). But it sounds like exported account settings don't include post/comment history. Thanks, though, @[email protected].

[–] [email protected] 2 points 5 months ago
 

I host my own Lemmy instance and have a user account on it that I use everywhere (I don't host local communities, I just use it as a home for my Lemmy user account). I needed to re-home my Lemmy server, and though it's a docker installation, copying the /var/lib/docker/volumes/lemmy_* directories to the new installation didn't work. So I created a new Lemmy server.

How can I move my old account to the new server, so I can keep all my subscriptions and post/comment history?

[–] [email protected] 0 points 5 months ago

Congratulations! And thank you.

[–] [email protected] 5 points 6 months ago

I host Baikal myself and sync to it via DAVx5 on Android and via Thunderbird on Ubuntu.

[–] [email protected] 1 points 6 months ago

I'm embarrassed but very pleased that your example also taught me about set_conversation_response! I had been using tts.speak, which meant I had to define a specific media player, which wasn't always what I wanted to do. This is great!
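For anyone else who finds this later, here's a minimal sketch of what I mean, as it would look in automations.yaml (the sentence and the helper id are just examples from my setup):

- alias: "Answer 'what is the date of my birthday' (example)"
  trigger:
    - platform: conversation
      command: "what is the date of my birthday"
  action:
    # Replies through whichever voice assistant heard the command,
    # so no media player has to be hardcoded like with tts.speak
    - set_conversation_response: "Your birthday is {{ states('input_date.event_1') }}"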

[–] [email protected] 1 points 6 months ago (1 children)

That is HUGE! Thank you, @[email protected]! This makes customizing conversations from automations so much more powerful and flexible!

[–] [email protected] 3 points 6 months ago

@[email protected], @[email protected], and @[email protected],

Thanks for your help. My main issue ended up being that I was trying to use Let's Encrypt's staging mode, but since staging certs aren't signed by a trusted CA, Traefik wasn't accepting the requests. Also, I had to switch Traefik's logging level from Error to Info to even see that.
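For anyone who hits the same thing, the relevant bits of my Traefik static config now look roughly like this (the resolver name, email, storage path, and entrypoint are placeholders from my setup):

log:
  level: INFO   # the ACME failures were invisible at the default ERROR level
certificatesResolvers:
  letsencrypt:
    acme:
      email: [email protected]   # placeholder
      storage: /letsencrypt/acme.json
      # Staging issues untrusted certs; drop this line to use the production CA
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      httpChallenge:
        entryPoint: web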

[–] [email protected] 1 points 6 months ago (4 children)

Yes, @[email protected], now that I know I can use sentence syntax in automations, I've built one automation to handle my specific needs. But each trigger is a hardcoded value instead of a "variable". For example, trigger 1 is "sentence = 'what is the date of my birthday'", and I conditionally trigger an action to speak the value of input_date.event_1 because I know that's where I stored the date for "my birthday".

What would be awesome is your 2nd suggestion: passing the name of the input_date helper through to the response with a wildcard. I can't figure out how to do that. I've tried defining and using slots but I just don't understand the syntax. Which file do I define the slots in, and what is the syntax?
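For context, this is roughly the shape I'm imagining, if it's even possible: a sentence trigger with a wildcard, then picking the right helper in the response (untested, and the helper ids are the ones from my setup):

- alias: "Date of named event (sketch)"
  trigger:
    - platform: conversation
      command: "what is the date of {event_name}"
  action:
    - set_conversation_response: >-
        {% if trigger.slots.event_name | lower == states('input_text.event_1') | lower %}
          {{ states('input_date.event_1') }}
        {% elif trigger.slots.event_name | lower == states('input_text.event_2') | lower %}
          {{ states('input_date.event_2') }}
        {% else %}
          I don't have a date saved for {{ trigger.slots.event_name }}.
        {% endif %}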

18
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

I'm hoping someone can help me figure out what I'm doing wrong.

I have a VM on my local network that has Traefik, 2 apps (whoami and myapp), and wireguard in server mode (let's call this VM "server"). I have another VM on the same network with Traefik and wireguard in client mode (let's call this VM "client").

  • both VMs can ping each other using their VPN IP addresses
  • wireguard successfully handshakes
  • I have myapp.mydomain.com as a host override on my router so every computer in my house points it to "client"
  • when I run curl -L --header 'Host: myapp.mydomain.com' from the myapp container it successfully returns the myapp page.

But when I browse to http://myapp.mydomain.com I get "Internal Server Error", yet nothing appears in the docker logs of any of the containers (neither of the traefik containers, neither of the wireguard containers, nor the myapp container).

Any suggestions/assistance would be appreciated!
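In case it helps with debugging, this is roughly the dynamic (file provider) config I'd expect to need on "client" to forward the app over the tunnel; the VPN IP and port below are placeholders:

http:
  routers:
    myapp:
      rule: "Host(`myapp.mydomain.com`)"
      service: myapp
  services:
    myapp:
      loadBalancer:
        servers:
          # the "server" VM's WireGuard address (placeholder IP and port)
          - url: "http://10.13.13.1:80"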

12
submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
 

I have input_text.event_1 where the value is currently "birthday", input_text.event_2 where the value is currently "christmas", input_date.event_1 where the value is currently "1/1/2000", and input_date.event_2 where the value is currently "12/25/2024". How do I configure the voice assistant to recognize a phrase like "what's the date of birthday" and return "1/1/2000"?

I'm guessing there's some combination of templating and "lists", but there are too many variables for me to continue guessing: conversations, intents, sentences, slots, lists, wildcards, yaml files...

I've tried variations of this in multiple files:

language: "en"
intents:
  WhatsTheDateOf:
    sentences:
      - "what's the date of {eventname}"
    data:
      - sentences:
          - "what's the date of {eventname}"
lists:
  eventname:
    wildcard: true
    values:
      - "{{ states('input_text.event_1') }}"
      - "{{ states('input_text.event_2') }}"

Should it be in conversations.yaml, intent_scripts.yaml, or a file in custom_sentences/en? Or does "lists" go in one file and "intents" in another? In the intent, do I need to define my sentence twice?

I'd appreciate any help. I feel like once I see the yaml of a way that works, I'll be able to examine it and understand how to make derivations work in the future.
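To be concrete, here's my best guess at the overall shape, pieced together from the docs (untested; the custom_sentences file name and putting the intent_script in configuration.yaml are assumptions on my part):

# config/custom_sentences/en/events.yaml
language: "en"
intents:
  WhatsTheDateOf:
    data:
      - sentences:
          - "what's the date of {eventname}"
lists:
  eventname:
    wildcard: true

# configuration.yaml (or an included intent_script file)
intent_script:
  WhatsTheDateOf:
    speech:
      text: >-
        {% if eventname | lower == states('input_text.event_1') | lower %}
          {{ states('input_date.event_1') }}
        {% else %}
          {{ states('input_date.event_2') }}
        {% endif %}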

 

Hi. I self-host gitea in docker and have a few repos, users, keys, etc. I installed forgejo in docker and it runs, so I stopped the container and copied /var/lib/docker/volumes/gitea_data/_data/* to /var/lib/docker/volumes/forgejo_data/_data/, but when I restart the forgejo container, forgejo doesn't show any of my repos, users, keys, etc.

My understanding was that the current version of forgejo is a drop-in replacement for gitea, so I was hoping all gitea resources were saved to its docker volume and would thus be instantly usable by forgejo. Guess not. :(

Does anyone have any experience migrating their gitea instance to forgejo?
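For reference, this is the kind of compose file I'm using on the forgejo side, pointed at the volume I copied the gitea data into (the image tag and ports are from my setup and may not match yours):

services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:1.21   # tag is a placeholder
    restart: unless-stopped
    volumes:
      # gitea kept repos, app.ini, and its database under /data,
      # so the copied volume gets mounted in the same place
      - forgejo_data:/data
    ports:
      - "3000:3000"
      - "2222:22"

volumes:
  forgejo_data:
    external: true   # reuse the existing forgejo_data volume instead of creating a new one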

16
submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/[email protected]
 

Howdy.

I have the following helpers:

  • input_text.countdown_date_01_name
  • input_datetime.countdown_date_01_date,
  • input_text.countdown_date_02_name
  • input_datetime.countdown_date_02_date
  • I want to add a couple more if I can get this to work

I want to be able to say "how many days until X", where X is the value of either input_text.countdown_date_01_name or input_text.countdown_date_02_name, and have Home Assistant speak the response "there are Y days until X" for whichever name was spoken.

I know how to determine the number of days until the date that is the value of input_datetime.countdown_date_01_date or input_datetime.countdown_date_02_date. But so far I've been unable to figure out how to configure the sentence/intent so that HA knows which one to retrieve the value of.

In config/conversations.yaml I have:

intents:
  HowManyDaysUntil:
    - "how many days until {countdownname}"

In config/intents/sentences/en/_common.yaml I have:

lists:
  countdownname:
    values:
      - '{{ states("input_text.countdown_date_01_name") }}'
      - '{{ states("input_text.countdown_date_02_name") }}'

In config/intent_scripts.yaml I have:

HowManyDaysUntil:
  action:
    service: automation.trigger
    data:
      entity_id: automation.how_many_days_until_countdown01

(this automation is currently hardcoded to calculate and speak the days until input_datetime.countdown_date_01_date)

The values of my helpers are currently:

  • input_text.countdown_date_01_name = "vacation"
  • input_datetime.countdown_date_01_date = "6/1/2024"

When I speak "how many days until vacation" I get Unexpected error during intent recognition.

I'd appreciate your help with this!
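In case it helps to see what I'm aiming for, here's my best guess at an intent_script that answers directly instead of triggering the hardcoded automation (untested, and it assumes the slot value is available as a template variable named countdownname):

HowManyDaysUntil:
  speech:
    text: >-
      {% if countdownname | lower == states('input_text.countdown_date_01_name') | lower %}
        {% set target = state_attr('input_datetime.countdown_date_01_date', 'timestamp') %}
      {% else %}
        {% set target = state_attr('input_datetime.countdown_date_02_date', 'timestamp') %}
      {% endif %}
      There are {{ ((target - as_timestamp(now())) / 86400) | round(0) | int }} days until {{ countdownname }}.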

 

I have some of the ATOM Echos that HA describes here. They work for voice recognition but the speaker in these tiny boxes is...tiny. It's barely audible when standing right next to the box, and completely inaudible when standing 10 feet away or if there is noise in the room.

Examples of the voice responses I'm talking about are "I'm sorry but I don't understand that" or "The current time is 2:15pm" or "I turned on the lights in the living room."

Is it possible to re-route the voice responses to a different media player? Currently, I have a Google Home Mini in each room that I have an ATOM Echo in. It would be nice if I could somehow determine which Echo received the voice command, which area that Echo is in (e.g., "living room"), and then re-route the voice response to a media player in that area.

But I have no idea how to do this.

 

I have a robot vacuum that sends an alert to HA when it's done cleaning or when it encounters a problem. How can I intercept or re-route those notifications? I want to post them to Matrix, which I do have an integration for.
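For what it's worth, this is the general shape I'm picturing, assuming the alert shows up as a state change on the vacuum entity and that the Matrix integration exposes a notify service (the entity id, states, and service name below are placeholders):

- alias: "Forward vacuum status to Matrix (sketch)"
  trigger:
    - platform: state
      entity_id: vacuum.robot   # placeholder entity id
      to:
        - "docked"
        - "error"
  action:
    - service: notify.matrix_notify   # placeholder; whatever name the Matrix notifier was given
      data:
        message: "Vacuum is now {{ trigger.to_state.state }}"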

Thanks for assistance.

 

I've had a problem for a year or more, so that's through numerous Home Assistant updates: I have about 15 automations that I've disabled, but they always become enabled again within a few days. I haven't been able to determine a trigger for the re-enabling.

Has anyone else encountered this? Does anyone have a suggestion?

6
Projector suggestions (lemmy.d.thewooskeys.com)
 

I'm trying to find a new projector for my home theater. I don't need high-end, but it should be at least 1080p (i.e., 4K isn't necessary). I mention that because of my budget: I'm looking for something around $500, but I might be able to go up to $1000. The other main requirement is that I'm able to turn it on/off via Home Assistant.

Other nice features to have, but not requirements:

  • Ability to adjust vertical and horizontal keystones (I mount this projector on a ceiling)
  • Decent brightness & contrast (it doesn't have to be the brightest on the market, but it shouldn't be the dimmest)
  • HDMI connector (I have a 50 foot HDMI cable now, but if sending data to projectors via wifi is a thing, that would be better)

Thanks for your suggestions.

 

I installed the Auto Backup HACS integration and I have a network storage configured in Home Assistant (FYI, I'm running HAOS). If I use HA's Developer Tools and manually call the "Auto Backup: Backup Full" service, there is a "Location" field where I can select my network storage. The backup successfully completes and saves to my network storage.

But in an Automation (based on the Auto Backup blueprint), I can't find a way to configure the Location - it defaults to HA's data disk (i.e., /root/backup). Do I have to manually add the location in the YAML? If so, how do I access the actual YAML? When I select "Edit in YAML", all I see is the barebones blueprint YAML:

alias: Automatic Backups
description: using the Auto Backup HACS integration
use_blueprint:
  path: jcwillox/automatic_backups.yaml
  input:
    backup_time: "02:00:00"
    enable_yearly: false

When I view the automation's traces I can see much more detailed YAML, but I can't edit it.
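For reference, this is what I expected to be able to write as a plain automation instead of the blueprint, based on the fields I see when calling the service manually (the location name is a placeholder for my network storage entry):

- alias: "Automatic backups to network storage (sketch)"
  trigger:
    - platform: time
      at: "02:00:00"
  action:
    - service: auto_backup.backup_full
      data:
        # the same "Location" I pick in Developer Tools; the name is a placeholder
        location: my_network_storage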

Thanks for assistance.

 

Does anyone have any experience with self-cleaning cat litter boxes? I'm curious if any particular model of self-cleaning litter box is any good. We now have 4 cats and it would be nice to not have to clean litter boxes manually 1-2 times every day.

Do they separate pee/poop from litter well? Are cats afraid to use them? Do they stink more than regular litter boxes because pee/poop are in them for longer periods? Are they a hassle to clean? Do you have to buy proprietary supplies (custom litter? special trays?)?

Thanks for your input.

 

I bought an old iPad 2 for the purpose of viewing a Home Assistant dashboard via a web browser. My thinking was that the ability to browse the web was the sole requirement for a tablet for this purpose, but I was wrong: Home Assistant's web pages apparently require a newer version of JavaScript than iOS 9.3.5 can handle, and the iPad 2 can't be updated past iOS 9.3.5.

So is it possible to flash a newer OS (e.g., Linux) onto an old iPad 2? ChatGPT says it's not possible because a bootloader exploit for the iPad 2 isn't known, but ChatGPT is often wrong.
