General Programming Discussion

A general programming discussion community.

Rules:

  1. Be civil.
  2. Please start discussions that spark conversation.

I'm really bad at keeping my dependencies up to date manually, so Dependabot was great for me. I don't use GitHub anymore though, and I haven't really been able to find a good alternative.

I found Snyk, which seems to do that, but it only allows logging in with third-party providers, which I'm not a big fan of.

Edit: it seems Snyk also only supports a few git hosts, and Codeberg isn't one of them.

Taking accurate screenshots with Puppeteer has been a real pain, especially with pages that don't fully load by the time the standard waitUntil: load fires. Some sites, particularly SPAs built with Angular, React, or Vue, end up half-loaded, leaving me with screenshots where parts of the page are just blank. Peachy, just peachy.

I've had the same issue with waitUntil: domcontentloaded, but that one was kind of expected. The real problem is that the page load event fires too early: for pages relying on JavaScript to load images or other resources, the screenshot captures a half-baked page. Useless, obviously.

After some digging accompanied by a certain type of language (the beep type), I did find a few workarounds. For example, you can have Puppeteer wait for specific DOM elements to appear or disappear. Another approach is to wait for the network to be idle for a certain amount of time. But what really helped was finding a custom function that waits for DOM updates to settle (source). It's the closest thing to a solution for getting fully loaded screenshots across different types of websites, at least from what I was able to find. Hope it helps anyone who struggles with this issue.
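
For reference, here's a minimal sketch of those three workarounds in one place. I'm writing it against pyppeteer, the unofficial Python port of Puppeteer (the Node API has the same calls under the same names); the settle function is my own rough take on the idea rather than the code from the linked source, and the URL, selector, and timings are placeholders.

```python
import asyncio

from pyppeteer import launch  # unofficial Python port of Puppeteer

# In-page JS: resolve once no DOM mutations have happened for quietMs,
# or give up after timeoutMs so a noisy page can't stall us forever.
SETTLE_JS = """function waitForDomToSettle(quietMs, timeoutMs) {
    return new Promise(resolve => {
        let quietTimer = setTimeout(finish, quietMs);
        const deadline = setTimeout(finish, timeoutMs);
        const observer = new MutationObserver(() => {
            clearTimeout(quietTimer);
            quietTimer = setTimeout(finish, quietMs);
        });
        observer.observe(document.documentElement,
                         { childList: true, subtree: true, attributes: true });
        function finish() {
            observer.disconnect();
            clearTimeout(quietTimer);
            clearTimeout(deadline);
            resolve();
        }
    });
}"""

async def screenshot(url, path):
    browser = await launch()
    page = await browser.newPage()
    # Workaround 1: wait for the network to go quiet instead of `load`.
    await page.goto(url, waitUntil='networkidle0')
    # Workaround 2: wait for an element the SPA renders late (site-specific).
    # await page.waitForSelector('#content')
    # Workaround 3: wait until DOM mutations settle (0.5 s quiet, 10 s cap).
    await page.evaluate(SETTLE_JS, 500, 10000)
    await page.screenshot(path=path, fullPage=True)
    await browser.close()

asyncio.run(screenshot('https://example.com', 'page.png'))
```

None of these is bulletproof on its own; in practice, combining the network-idle wait with the settle function worked best for me, since networkidle0 alone misses work that gets scheduled after the last response arrives.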

The Benefits of Technology Consulting for Small and Medium Enterprises

In the fast-paced digital landscape, small and medium enterprises (SMEs) face unique challenges in keeping up with technological advancements while managing limited resources. Technology consulting offers a strategic advantage to these businesses, providing expert guidance and tailored solutions that drive growth, efficiency, and innovation.

Cost-Effective Solutions

One of the primary benefits of technology consulting for SMEs is the ability to access cost-effective solutions. Consultants help businesses identify and implement technology that fits their budget while maximizing impact. Whether it’s selecting the right software, optimizing existing systems, or migrating to cloud services, technology consultants ensure that SMEs get the most value for their investment without overspending.

Tailored Strategies

Unlike large corporations, SMEs often require tailored technology strategies aligning with their needs and goals. Technology consultants work closely with business owners to understand their unique challenges and objectives. They then design and implement customized technology solutions that address these needs, from improving operational efficiency to enhancing customer engagement. This personalized approach ensures that the technology supports the business’s growth and competitiveness.

Access to Expertise

SMEs may lack the in-house expertise needed to navigate complex technology landscapes. Technology consultants bring specialized knowledge and experience that SMEs can leverage to make informed decisions. They stay up-to-date with the latest technological trends and best practices, helping businesses adopt new tools and strategies that might otherwise be out of reach. This access to expertise enables SMEs to compete more effectively in their industry.

Improved Efficiency and Productivity

Technology consulting can significantly improve efficiency and productivity for SMEs. Consultants analyze existing processes and recommend technologies that automate routine tasks, streamline operations, and reduce manual errors. For example, implementing a customer relationship management (CRM) system can help manage customer interactions more efficiently, while cloud-based tools can facilitate remote work and collaboration. These improvements free up valuable time and resources, allowing businesses to focus on growth and innovation.

Risk Management and Security

In today’s digital world, cybersecurity is a critical concern for businesses of all sizes. Technology consultants help SMEs implement robust security measures to protect their data and systems from cyber threats. They assess potential risks, recommend best practices, and ensure compliance with industry standards. By safeguarding their digital assets, SMEs can operate with confidence and avoid costly security breaches.

Scalability and Future Growth

As SMEs grow, their technology needs evolve. Technology consulting ensures that the solutions implemented today are scalable and can support future expansion. Consultants design flexible IT infrastructures that can adapt to changing business requirements, enabling SMEs to scale their operations without significant disruptions. This scalability is essential for sustaining long-term success in a competitive market.

Conclusion

Technology consulting offers numerous benefits for small and medium enterprises, from cost-effective solutions and tailored strategies to improved efficiency and enhanced security. By leveraging the expertise of technology consultants, SMEs can overcome their unique challenges, drive innovation, and achieve sustainable growth in today’s rapidly evolving digital landscape.

I initially wrote this as a response to this joke post, but I think it deserves a separate post.

As a software engineer, I am deeply familiar with the concept of rubber duck debugging. It's fascinating how "just" (re-)phrasing a problem can open up a path to a solution or shed light on my own misconceptions or confusions. (As an aside, I find that other things have a similar effect: writing commit messages, and re-reading my own code under a different "lighting". For instance, after I finish a branch and push it to GitLab, I will sometimes immediately go and review the code (or just the diff) in GitLab, as opposed to my terminal or editor, and sometimes realize new things.)

But another thing I've been realizing for some time is that these "a-ha" moments always come with mixed feelings. Sure, it's great that I've been able to find the solution, but it also feels like a bit of a downer. I suspect that while crafting the question, I've subconsciously also been looking forward to the social interaction that would come from asking it: suddenly belonging to a group of engineers having a crack at the problem.

The thing is: I don't get that with ChatGPT. There's no downer, since there was never going to be any social interaction to begin with.

With ChatGPT, I can do the rubber duck debugging thing without the sad part.

If no rubber duck debugging happens and ChatGPT answers my question, then that's that, and I can move on.

If no rubber duck debugging happens and ChatGPT fails to answer my question, then at least I've gained some clarity about the problem by that point, which I can reuse to phrase my question to an actual community of peers, be it an IRC channel, a Discord server, or our team Slack channel.


So I'm wondering: do other people tend to use LLMs as this sort of interactive rubber duck?

And as a bit of a stretch of this idea---could an LLM be thought of as a tool for practicing asking questions, prior to actually asking real people?


PS: I should mention that I'm also not a native English speaker (which I guess is probably obvious by now from my writing), so part of my "learning to ask questions" is also learning to do it specifically in English.

I started writing this as an answer to someone on some Discord server, but it wouldn't fit the channel topic. I'd still love to see people's views on this.

So I'll quote the comment, just as a primer:

The safest pattern to use is to not use any pattern at all and write the most straight forward code. Apply patterns only when the simplest code is actually causing real problems.

First and foremost: many paths to hell are paved with design patterns applied willy-nilly. (A funny aside: the OO community seems to be more active and organized in describing them (while often not warning strongly enough about the dangers of inheritance, the true lord of the pattern rings), which leads to the lower-level, simpler patterns being underrepresented.)

But the other extreme is by no means without issues either.

I've seen too many FastAPI endpoints talking to the DB like there's no tomorrow. That is definitely the "straightforward" approach, but the first problem is already there: it's pretty much untestable. And soon enough everyone is coupling to random DB columns (and making random assumptions about their content, usually based on "well, let's see who writes what there" analysis), which makes the schema hard to change without playing whack-a-bug.

And what, our initial DB design wasn't future-proof? Tough luck changing it now. So new endpoints end up trying to make up for the obsolete schema, using pandas everywhere to do what SQL or some storage layer (perhaps with a unit-of-work pattern) should be doing -- further cementing in the obsolete design. Eventually it's close to impossible to know who writes or expects what, so now everyone had better be defensive, adding even more cruft (and more room for bugs).
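
To make that concrete, here's a minimal sketch of the kind of thin storage layer I mean. The users table, all the names, and the FastAPI wiring are invented for illustration; the point is only that one class owns the SQL and the column names, endpoints depend on that class, and tests can swap in a fake.

```python
import sqlite3
from dataclasses import dataclass

from fastapi import Depends, FastAPI

@dataclass
class User:
    id: int
    email: str

class UserStore:
    """The only place that knows table and column names."""

    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn

    def by_id(self, user_id: int) -> User | None:
        row = self._conn.execute(
            "SELECT id, email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return User(*row) if row else None

app = FastAPI()

def get_store():
    # One connection per request; a real app would use a pool or session.
    conn = sqlite3.connect("app.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
    )
    try:
        yield UserStore(conn)
    finally:
        conn.close()

@app.get("/users/{user_id}")
def read_user(user_id: int, store: UserStore = Depends(get_store)):
    # No SQL here; in tests, replace the store via app.dependency_overrides.
    user = store.by_id(user_id)
    return user or {"error": "not found"}
```

When the schema inevitably turns out not to be future-proof, only UserStore absorbs the change; the endpoints (and their tests) stand still. That's roughly the level of obtrusiveness I'm willing to pay for up front.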

My point is, I guess, that by the time there are identifiable "real problems" to be solved by a pattern, it's far too late.

Look, in general, postponing a decision to have more information can be a great strategy. But that depends on the quality of the information you get by postponing. If that extra information is just going to be the new features you added in the meantime, it will be heavily biased by all the defensive / making-up-for-a-bad-DB junk you forced yourself to keep adding. It won't necessarily get easier to see the right pattern.

So the tricky part is: which patterns are actually strong enough, yet unobtrusive enough, that you can start applying them early on? That's the million-dollar question.

I don't think "straightforward" gets you toward answering that question. (Well, to be fair, I'm sure people have made their $1M with "straightforward code", so there's that, but is it a good bet?)

(By the way, the real world actually has a nice pattern specifically for getting out of that hole; it's called "your competitor moving faster & being cheaper than you", so in a healthy market the problem should solve itself eventually...)


So what are your ideas? Do you have design patterns / disciplines that you tend to apply generally to new projects?

I'm not looking for actual patterns (although it's fine to suggest your favorites, or link to resources); I'm mainly interested in what people think about patterns in general, and how to apply them over the lifetime of a project.

Even though performance optimization may seem to have lost its relevance in an era of ever-increasing hardware performance, there are still many good reasons to spend time optimizing code. A recent preprint by [Paul Bilokon] and [Burak Gunduz] of Imperial College London focuses specifically on low-latency patterns.

cross-posted from: https://lemmy.ml/post/17978313

Shameless plug: I am the author.

Serious question. I know there are a lot of memes about microservices, both for and against them, and jokes from devs who go and turn monoliths into microservices and then back again. For my line of work it isn't all that relevant, but a discussion I heard today made me wonder.

There were two camps in this discussion. One side said microservices are the future: all big companies are moving towards them, and so is the entire industry. In their view, if it wasn't MACH architecture, it wasn't valid software. In their world, both software they made themselves and software bought or licensed externally (SaaS) should all be microservices, API-first, cloud-native, and headless.

The other camp said it was foolish to think this is actually what's happening in the industry, and that depending on where you look, microservices are being abandoned rather than adopted. By demanding that all software be like this, you limit what's on offer. Furthermore, the total cost of operation would be higher, and connecting everything together in a coherent way is a nightmare. Instead of gaining flexibility, you can actually lose it, because changing interfaces can be very hard or even impossible with software not fully under your own control. They argued that a lot of the claimed benefits are slight or even nonexistent, and not needed in this day and age.

They asked what I thought and I had to confess I didn't really have an answer for them. I don't know what the industry is doing and I think whether or not to use microservices is highly dependent on the situation. I don't know if there is a universal answer.

Do you guys have any good thoughts on this? Are microservices the future, or just a fad that needs to be forgotten ASAP?

It’s been over a year since Khronos started working on the Vulkan SC Ecosystem. Now that the component stack has reached a high level of maturity, it seemed appropriate to write an article about the secret sauce that enabled us to leverage the industry-proven Vulkan ecosystem components to provide corresponding developer tooling for the safety-critical variant of the API.

Vulkan SC was released by the Khronos Group in 2022 as the first of the new generation of explicit APIs to target safety-critical systems. The Vulkan SC 1.0 specification is based on the Vulkan 1.2 API and aims to give safety-critical application developers access to, and detailed control of, the graphics and compute capabilities of modern GPUs. To accomplish that, Vulkan SC removes functionality from Vulkan 1.2 that is not applicable, not relevant, or otherwise not essential for safety-critical markets, and tweaks the APIs to achieve even more deterministic and robust behavior to meet safety certification standards.

The Vulkan SC Ecosystem components, such as the ICD Loader and Validation Layers, are not safety-certified software components themselves; rather, they are developer tools intended for application developers writing safety-critical applications with the Vulkan SC API. Building on the success of the corresponding ecosystem components available for the Vulkan API, the goal for the Vulkan SC Ecosystem is to leverage the tremendous engineering effort that went (and still goes) into those in order to create a comparably comprehensive suite of developer tools for the safety-critical variant of the API, amended with additional features specific to Vulkan SC. Reaching that goal, however, came with its own set of challenges…

Performance optimisation matters when you are trying to get your application working in a resource-constrained environment. This is typically the case in embedded, but in some desktop scenarios you may also run short on resources, so it is not an insignificant matter on desktop either.

What we mean by performance here is the ability of the application to fulfill its purpose; in practice, this typically means sufficient FPS in the UI and meeting other non-functional requirements such as startup time, memory consumption, and CPU/GPU load.

There have been a number of discussions on Qt performance aspects, and as we have been working on a number of related items, we thought now would be a good time to provide a summary of all the activities and tools we have. You can use them to optimise the performance of your application and also in testing. We have been improving existing performance tools as well as adding new ones and providing guidelines, so let’s look at the latest additions. This post starts a stream of blog posts to help you with performance optimisation and to give a view into our activities in this area.

Hey, just uploading this here on Lemmy so I can link an attachment on a Godot forum (new users on the Godot forum can't upload attachments).

But if anyone here is a Godoteneer or a programmer, I'd love to hear your ideas.

The attached demo is a simulation I did in Blender showing the type of interaction I mean. The green wire mesh represents a vertical plane whose horizontal edge is shaped by a bezier curve. The ball has a physics simulation for up and down (the z-axis), with horizontal movement only along what I'll call the c-axis. So all linear movement on the c-axis is translated to an x and y position using the bezier plane.

My initial thought on how I might approach such a setup would be to create all nodes in 3D space except for the physics shapes. This might allow using 2D physics shapes while keeping the 3D model meshes and lights. But then it would have to be a 2D RigidBody, and I'm not sure if I could translate a 2D position space to a 3D position space ... I think I need a broader understanding of Godot's core code. In the past, I've been successful writing a C++ Godot module and later refactoring it into a GDExtension, but that was needed for a high-performance line-gesture engine. I haven't yet delved too much into the guts of Godot.
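
To show the translation step I have in mind, here's a rough sketch in plain Python (all names invented; inside Godot I'd expect a baked Curve2D and its sample_baked() to do this part for me). The ball's c-axis value is treated as distance along the curve and mapped to (x, y) through precomputed samples, while the physics sim supplies z.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate one cubic bezier segment at t in [0, 1] -> (x, y)."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def bake_curve(p0, p1, p2, p3, steps=256):
    """Precompute cumulative arc lengths and points along the segment."""
    pts = [cubic_bezier(p0, p1, p2, p3, i / steps) for i in range(steps + 1)]
    lengths, total = [0.0], 0.0
    for a, b in zip(pts, pts[1:]):
        total += ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
        lengths.append(total)
    return lengths, pts

def c_to_xy(c, lengths, pts):
    """Map a c-axis offset (distance along the curve) to an (x, y) pair."""
    c = max(0.0, min(c, lengths[-1]))  # clamp to the curve's total length
    for i in range(1, len(lengths)):
        if lengths[i] >= c:
            seg = lengths[i] - lengths[i - 1] or 1.0
            f = (c - lengths[i - 1]) / seg
            (ax, ay), (bx, by) = pts[i - 1], pts[i]
            return ax + f * (bx - ax), ay + f * (by - ay)
    return pts[-1]

# Example: bake one segment, then each physics frame do
#   x, y = c_to_xy(ball_c, lengths, pts)
#   ball.position = (x, y, ball_z)   # ball_z comes from the physics sim
lengths, pts = bake_curve((0, 0), (2, 3), (5, -3), (7, 0))
print(c_to_xy(3.0, lengths, pts))
```

So maybe the 2D-physics-in-3D question reduces to keeping the ball's state as (c, z) and only converting to 3D for rendering, rather than trying to embed a 2D physics space directly.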

Any pointers? Or inspired thoughts?

Codevis is a large-scale software visualizer focused on C++ codebases. It can help you identify issues and smells in your codebase. It also has an extensive plugin interface and some preliminary scripting support.

Features:

  • Generate a visualization from pre-existing code
  • Generate architectural code from a visualization
  • Plugin system that allows you to add missing features
  • Architectural linters (not just code linters)
  • DBus support